Solutions for Goode - DFQ and LA

Solutions to Section 1.1

True-False Review:

1. FALSE. A differential equation must involve some derivative of the function y = f(x), but not necessarily the first derivative.

2. TRUE. The initial conditions accompanying a differential equation consist of the values of y, y', ... at t = 0.

3. TRUE. If we define positive velocity to be oriented downward, then dv/dt = g, where g is the acceleration due to gravity.

4. TRUE. We can justify this mathematically by starting from a(t) = g and integrating twice to get v(t) = gt + c, and then s(t) = (1/2)g t^2 + ct + d, which is a quadratic function of t.

5. FALSE. The restoring force is directed in the direction opposite to the displacement from the equilibrium position.

6. TRUE. According to Newton's Law of Cooling, the rate of cooling is proportional to the difference between the object's temperature and the medium's temperature. Since that difference is greater for the object at 100°F than for the object at 90°F, the object whose temperature is 100°F has the greater rate of cooling.

7. FALSE. The temperature of the object is given by T(t) = T_m + c e^(-kt), where T_m is the temperature of the medium, and c and k are constants. Since e^(-kt) is never zero, we see that T(t) ≠ T_m for all times t. The temperature of the object approaches the temperature of the surrounding medium, but never equals it.

8. TRUE. Since the temperature of the coffee is falling, the temperature difference between the coffee and the room is greater initially, during the first hour, than it is later, when the temperature of the coffee has already decreased.

9. FALSE. The slopes of the two curves are negative reciprocals of each other.

10. TRUE. If the original family of parallel lines has slope k ≠ 0, then the family of orthogonal trajectories consists of parallel lines with slope -1/k. If the original family of parallel lines is vertical (resp. horizontal), then the family of orthogonal trajectories consists of horizontal (resp. vertical) parallel lines.

11. FALSE.
The family of orthogonal trajectories for a family of circles centered at the origin is the family of lines passing through the origin.

Problems:

1. Starting from the differential equation d^2y/dt^2 = g, where g is the acceleration of gravity and y is the unknown position function, we integrate twice to obtain the general equations for the velocity and the position of the object:

dy/dt = gt + c1  and  y(t) = (1/2)g t^2 + c1 t + c2,

where c1, c2 are constants of integration. Now we impose the initial conditions: y(0) = 0 implies that c2 = 0, and dy/dt(0) = 0 implies that c1 = 0. Hence, the solution to the initial-value problem is y(t) = (1/2)g t^2. The object hits the ground at the time t0 for which y(t0) = 100. Hence 100 = (1/2)g t0^2, where we have taken g = 9.8 m s^-2, so that t0 = sqrt(200/g) ≈ 4.52 s.

2. Starting from the differential equation d^2y/dt^2 = g, where g is the acceleration of gravity and y is the unknown position function, we integrate twice to obtain the general equations for the velocity and the position of the ball, respectively:

dy/dt = gt + c  and  y(t) = (1/2)g t^2 + ct + d,

where c, d are constants of integration. Setting y = 0 to be at the top of the boy's head (with the positive direction downward), we know that y(0) = 0, so d = 0. Since the ball hits the ground 8 seconds later, we have y(8) = 5 (the ground lies at the position y = 5). From y(8) = 5 we find 5 = 32g + 8c, so that c = (5 - 32g)/8. (This problem works in feet, with g = 32 ft s^-2.)

(a) The ball reaches its maximum height at the moment when y'(t) = 0, that is, gt + c = 0. Therefore, t = -c/g = (32g - 5)/(8g) ≈ 3.98 s.

(b) To find the maximum height of the tennis ball, we compute y(3.98) ≈ -253.51 feet. So the ball is 253.51 feet above the top of the boy's head, which is 258.51 feet above the ground.
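The arithmetic in Problems 1 and 2 is easy to spot-check numerically. A minimal sketch in plain Python (the variable names, and the assumption that Problem 2 works in feet with g = 32 ft/s^2, are ours):

```python
import math

# Problem 1 (SI units): y(t) = g t^2 / 2 reaches 100 m at t0 = sqrt(200/g).
g_si = 9.8                       # m/s^2
t0 = math.sqrt(200 / g_si)
print(round(t0, 2))              # ~4.52 s

# Problem 2 (feet assumed, g = 32 ft/s^2, positive direction downward):
g_ft = 32.0
c = (5 - 32 * g_ft) / 8          # initial velocity from y(8) = 5
t_max = -c / g_ft                # moment when y'(t) = g t + c = 0
y_max = 0.5 * g_ft * t_max**2 + c * t_max
print(round(t_max, 2))           # ~3.98 s
print(round(-y_max, 2))          # ~253.51 ft above the boy's head
```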
3. Starting from the differential equation d^2y/dt^2 = g, where g is the acceleration of gravity and y is the unknown position function, we integrate twice to obtain the general equations for the velocity and the position of the rocket, respectively:

dy/dt = gt + c  and  y(t) = (1/2)g t^2 + ct + d,

where c, d are constants of integration. Setting y = 0 to be at ground level, we know that y(0) = 0. Thus, d = 0.

(a) The rocket reaches maximum height at the moment when y'(t) = 0, that is, gt + c = 0. Therefore, the time at which the rocket achieves its maximum height is t = -c/g. At this time, y(t) = -90 (the negative sign accounts for the fact that the positive direction is chosen to be downward). Hence,

-90 = y(-c/g) = (1/2)g(-c/g)^2 + c(-c/g) = c^2/(2g) - c^2/g = -c^2/(2g).

Solving this for c, we find that c = ±sqrt(180g). However, since c represents the initial velocity of the rocket, and the initial velocity is negative (relative to the fact that the positive direction is downward), we choose c = -sqrt(180g) ≈ -42.02 m s^-1, and thus the initial speed at which the rocket must be launched for optimal viewing is approximately 42.02 m s^-1.

(b) The time at which the rocket reaches its maximum height is t = -c/g ≈ -(-42.02)/9.81 ≈ 4.28 s.

4. Starting from the differential equation d^2y/dt^2 = g, where g is the acceleration of gravity and y is the unknown position function, we integrate twice to obtain the general equations for the velocity and the position of the rocket, respectively:

dy/dt = gt + c  and  y(t) = (1/2)g t^2 + ct + d,

where c, d are constants of integration. Setting y = 0 to be at the level of the platform (with positive direction downward), we know that y(0) = 0. Thus, d = 0.

(a) The rocket reaches maximum height at the moment when y'(t) = 0, that is, gt + c = 0. Therefore, the time at which the rocket achieves its maximum height is t = -c/g. At this time, y(t) = -85 (this is 85 m above the platform, or 90 m above the ground).
Hence,

-85 = y(-c/g) = (1/2)g(-c/g)^2 + c(-c/g) = c^2/(2g) - c^2/g = -c^2/(2g).

Solving this for c, we find that c = ±sqrt(170g). However, since c represents the initial velocity of the rocket, and the initial velocity is negative (relative to the fact that the positive direction is downward), we choose c = -sqrt(170g) ≈ -40.84 m s^-1, and thus the initial speed at which the rocket must be launched for optimal viewing is approximately 40.84 m s^-1.

(b) The time at which the rocket reaches its maximum height is t = -c/g ≈ -(-40.84)/9.81 ≈ 4.16 s.

5. If y(t) denotes the displacement of the object from its initial position at time t, the motion of the object can be described by the initial-value problem

d^2y/dt^2 = g,  y(0) = 0,  dy/dt(0) = -2,

where g is the acceleration of gravity and y is the unknown position function. We integrate this differential equation twice to obtain the general equations for the velocity and the position of the object:

dy/dt = gt + c1  and  y(t) = (1/2)g t^2 + c1 t + c2.

Now we impose the initial conditions: since y(0) = 0, we have c2 = 0. Moreover, since dy/dt(0) = -2, we have c1 = -2. Hence the solution to the initial-value problem is y(t) = (1/2)g t^2 - 2t. We are given that y(10) = h. Consequently,

h = (1/2)g(10)^2 - 2·10 = 10(5g - 2) ≈ 470 m,

where we have taken g = 9.8 m s^-2.

6. If y(t) denotes the displacement of the object from its initial position at time t, the motion of the object can be described by the initial-value problem

d^2y/dt^2 = g,  y(0) = 0,  dy/dt(0) = v0.

We integrate the differential equation twice to obtain the velocity and position functions, respectively:

dy/dt = gt + c1  and  y(t) = (1/2)g t^2 + c1 t + c2.

Now we impose the initial conditions. Since y(0) = 0, we have c2 = 0. Moreover, since dy/dt(0) = v0, we have c1 = v0. Hence the solution to the initial-value problem is y(t) = (1/2)g t^2 + v0 t. We are given that y(t0) = h. Consequently, h = (1/2)g t0^2 + v0 t0. Solving for v0 yields

v0 = (2h - g t0^2) / (2 t0).
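The numbers quoted in Problems 3-5 follow directly from the closed-form answers; a quick numerical check in plain Python (g values as used in the text):

```python
import math

# Problem 3: maximum height 90 m requires c^2/(2g) = 90, i.e. |c| = sqrt(180 g).
g = 9.81                                  # m/s^2
c3 = math.sqrt(180 * g)
print(round(c3, 2), round(c3 / g, 2))     # launch speed ~42.02 m/s, time ~4.28 s

# Problem 4: the identical computation with 85 m.
c4 = math.sqrt(170 * g)
print(round(c4, 2), round(c4 / g, 2))     # ~40.84 m/s, ~4.16 s

# Problem 5 (g = 9.8): h = y(10) = 10 (5g - 2).
print(round(10 * (5 * 9.8 - 2)))          # 470 m
```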
7. From y(t) = A cos(ωt - φ), we obtain

dy/dt = -Aω sin(ωt - φ)  and  d^2y/dt^2 = -Aω^2 cos(ωt - φ).

Hence,

d^2y/dt^2 + ω^2 y = -Aω^2 cos(ωt - φ) + Aω^2 cos(ωt - φ) = 0.

Substituting y(0) = a, we obtain a = A cos(-φ) = A cos(φ). Also, from dy/dt(0) = 0, we obtain 0 = -Aω sin(-φ) = Aω sin(φ). Since A ≠ 0 and ω ≠ 0 and |φ| < π, we have φ = 0. It follows that a = A.

8. Taking derivatives of y(t) = c1 cos(ωt) + c2 sin(ωt), we obtain

dy/dt = -c1 ω sin(ωt) + c2 ω cos(ωt)

and

d^2y/dt^2 = -c1 ω^2 cos(ωt) - c2 ω^2 sin(ωt) = -ω^2 [c1 cos(ωt) + c2 sin(ωt)] = -ω^2 y.

Consequently, d^2y/dt^2 + ω^2 y = 0. To determine the amplitude of the motion we write the solution to the differential equation in the equivalent form:

y(t) = sqrt(c1^2 + c2^2) [ (c1/sqrt(c1^2 + c2^2)) cos(ωt) + (c2/sqrt(c1^2 + c2^2)) sin(ωt) ].

We can now define an angle φ by

cos φ = c1/sqrt(c1^2 + c2^2)  and  sin φ = c2/sqrt(c1^2 + c2^2).

Then the expression for the solution to the differential equation is

y(t) = sqrt(c1^2 + c2^2) [cos(ωt) cos φ + sin(ωt) sin φ] = sqrt(c1^2 + c2^2) cos(ωt - φ).

Consequently the motion corresponds to an oscillation with amplitude A = sqrt(c1^2 + c2^2).

9. We compute the first three derivatives of y(t) = ln t:

dy/dt = 1/t,  d^2y/dt^2 = -1/t^2,  d^3y/dt^3 = 2/t^3.

Therefore,

2 (dy/dt)^3 = 2/t^3 = d^3y/dt^3,

as required.

10. We compute the first two derivatives of y(x) = x/(x+1):

dy/dx = 1/(x+1)^2  and  d^2y/dx^2 = -2/(x+1)^3.

Then

y + d^2y/dx^2 = x/(x+1) - 2/(x+1)^3 = (x^3 + 2x^2 + x - 2)/(x+1)^3 = [(x+1) + (x^3 + 2x^2 - 3)]/(x+1)^3 = 1/(x+1)^2 + (x^3 + 2x^2 - 3)/(1+x)^3 = dy/dx + (x^3 + 2x^2 - 3)/(1+x)^3,

as required.

11. We compute the first two derivatives of y(x) = e^x sin x:

dy/dx = e^x (sin x + cos x)  and  d^2y/dx^2 = 2 e^x cos x.

Then

2y cot x - d^2y/dx^2 = 2(e^x sin x) cot x - 2 e^x cos x = 0,

as required.

12. Starting from (T - T_m)^(-1) dT/dt = -k, we obtain d/dt (ln |T - T_m|) = -k. The preceding equation can be integrated directly to yield ln |T - T_m| = -kt + c1. Exponentiating both sides of this equation gives |T - T_m| = e^(-kt + c1), which can be written as
T - T_m = c e^(-kt), where c = ±e^(c1). Rearranging this, we conclude that T(t) = T_m + c e^(-kt).

13. After 4 p.m. In the first two hours after noon, the water temperature increased from 50°F to 55°F, an increase of five degrees. Because the temperature of the water has grown closer to the ambient air temperature, the temperature difference |T - T_m| is smaller, and thus the rate of change of the temperature of the water grows smaller, according to Newton's Law of Cooling. Thus, it will take longer for the water temperature to increase another five degrees. Therefore, the water temperature will reach 60°F more than two hours after 2 p.m., that is, after 4 p.m.

14. The object's temperature cools a total of 40°F during the 40 minutes, but according to Newton's Law of Cooling, it cools faster in the beginning (since |T - T_m| is greater at first). Thus, the object cooled halfway from 70°F to 30°F in less than half the total cooling time. Therefore, it took less than 20 minutes for the object to reach 50°F.

15. Applying implicit differentiation to the given family of curves x^2 + 4y^2 = c with respect to x gives 2x + 8y dy/dx = 0. Therefore, dy/dx = -x/(4y). Therefore, the collection of orthogonal trajectories satisfies:

dy/dx = 4y/x  =>  (1/y) dy/dx = 4/x  =>  d/dx (ln |y|) = 4/x  =>  ln |y| = 4 ln |x| + c1  =>  y = k x^4,

where k = ±e^(c1).

Figure 1: Figure for Problem 15

16. Differentiation of the given family of curves y = c/x with respect to x gives

dy/dx = -c/x^2 = -(1/x)(c/x) = -y/x.

Therefore, the collection of orthogonal trajectories satisfies:

dy/dx = x/y  =>  y dy/dx = x  =>  d/dx ((1/2) y^2) = x  =>  (1/2) y^2 = (1/2) x^2 + c1  =>  y^2 - x^2 = c2,

where c2 = 2c1.

Figure 2: Figure for Problem 16

17. Solving the equation y = c x^2 for c gives c = y/x^2. Hence, differentiation leads to

dy/dx = 2cx = 2(y/x^2)x = 2y/x.
Therefore, the collection of orthogonal trajectories satisfies:

dy/dx = -x/(2y)  =>  2y dy/dx = -x  =>  d/dx (y^2) = -x  =>  y^2 = -(1/2) x^2 + c1  =>  2y^2 + x^2 = c2,

where c2 = 2c1.

Figure 3: Figure for Problem 17

18. Solving the equation y = c x^4 for c gives c = y/x^4. Hence,

dy/dx = 4c x^3 = 4(y/x^4) x^3 = 4y/x.

Therefore, the collection of orthogonal trajectories satisfies:

dy/dx = -x/(4y)  =>  4y dy/dx = -x  =>  d/dx (2y^2) = -x  =>  2y^2 = -(1/2) x^2 + c1  =>  4y^2 + x^2 = c2,

where c2 = 2c1.

Figure 4: Figure for Problem 18

19. Implicit differentiation of the given family of curves y^2 = 2x + c with respect to x gives 2y dy/dx = 2, that is, dy/dx = 1/y. Therefore, the collection of orthogonal trajectories satisfies:

dy/dx = -y  =>  y^(-1) dy/dx = -1  =>  d/dx (ln |y|) = -1  =>  ln |y| = -x + c1  =>  y = k e^(-x),

where k = ±e^(c1).

Figure 5: Figure for Problem 19

20. Differentiating the given family of curves y = c e^x with respect to x gives dy/dx = c e^x = y. Therefore, the collection of orthogonal trajectories satisfies:

dy/dx = -1/y  =>  y dy/dx = -1  =>  d/dx ((1/2) y^2) = -1  =>  (1/2) y^2 = -x + c1  =>  y^2 = -2x + c2,

where c2 = 2c1.

Figure 6: Figure for Problem 20

21. Differentiating the given family of curves y = mx + c with respect to x gives dy/dx = m. Therefore, the collection of orthogonal trajectories satisfies:

dy/dx = -1/m  =>  y = -(1/m) x + c1.

22. Differentiating the given family of curves y = c x^m with respect to x gives dy/dx = c m x^(m-1). Since c = y/x^m, we have dy/dx = my/x. Therefore, the collection of orthogonal trajectories satisfies:

dy/dx = -x/(my)  =>  y dy/dx = -x/m  =>  d/dx ((1/2) y^2) = -x/m  =>  (1/2) y^2 = -(1/(2m)) x^2 + c1  =>  y^2 = -(1/m) x^2 + c2,

where c2 = 2c1.

23. Implicit differentiation of the given family of curves y^2 + m x^2 = c with respect to x gives 2y dy/dx + 2mx = 0, that is, dy/dx = -mx/y.
Therefore, the collection of orthogonal trajectories satisfies:

dy/dx = y/(mx)  =>  y^(-1) dy/dx = 1/(mx)  =>  d/dx (ln |y|) = 1/(mx)  =>  m ln |y| = ln |x| + c1  =>  y^m = c2 x,

where c2 = ±e^(c1).

24. Implicit differentiation of the given family of curves y^2 = mx + c with respect to x gives 2y dy/dx = m, that is, dy/dx = m/(2y). Therefore, the collection of orthogonal trajectories satisfies:

dy/dx = -2y/m  =>  y^(-1) dy/dx = -2/m  =>  d/dx (ln |y|) = -2/m  =>  ln |y| = -(2/m) x + c1  =>  y = c2 e^(-2x/m),

where c2 = ±e^(c1).

25. Consider the coordinate curve u = x^2 + 2y^2 (i.e., u is constant). Differentiating implicitly with respect to x gives 0 = 2x + 4y dy/dx. Therefore, dy/dx = -x/(2y). Therefore, the collection of orthogonal trajectories satisfies:

dy/dx = 2y/x  =>  y^(-1) dy/dx = 2/x  =>  d/dx (ln |y|) = 2/x  =>  ln |y| = 2 ln |x| + c1  =>  y = c2 x^2,

where c2 = ±e^(c1).

Figure 7: Figure for Problem 25

26. We have

m1 = tan(a1) = tan(a2 - a) = (tan(a2) - tan(a)) / (1 + tan(a2) tan(a)) = (m2 - tan(a)) / (1 + m2 tan(a)).

Solutions to Section 1.2

True-False Review:

1. FALSE. The order of a differential equation is the order of the highest derivative appearing in the differential equation.

2. TRUE. This is condition 1 in Definition 1.2.11.

3. TRUE. This is the content of Theorem 1.2.15.

4. FALSE. There are solutions to y'' + y = 0 that do not have the form c1 cos x + 5c2 cos x, such as y(x) = sin x. Therefore, c1 cos x + 5c2 cos x does not meet the second requirement set forth in Definition 1.2.11 for the general solution.

5. FALSE. There are solutions to y'' + y = 0 that do not have the form c1 cos x + 5c1 sin x, such as y(x) = cos x + sin x. Therefore, c1 cos x + 5c1 sin x does not meet the second requirement set forth in Definition 1.2.11 for the general solution.

6. TRUE. Since the right-hand side of the differential equation is a function of x only, we can integrate both sides n times to obtain the formula for the solution y(x).
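Most of the verification problems that follow use one pattern: differentiate the candidate solution and substitute it into the equation. A sketch of that pattern, assuming SymPy is available (shown for the two-parameter family y = c1 e^x + c2 e^(-2x) and the equation y'' + y' - 2y = 0 treated in Problem 8 below):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# Candidate two-parameter solution family
y = c1 * sp.exp(x) + c2 * sp.exp(-2 * x)

# Substitute into y'' + y' - 2y and simplify the residual
residual = sp.diff(y, x, 2) + sp.diff(y, x) - 2 * y
print(sp.simplify(residual))  # 0, for all x and all c1, c2
```

The same three lines, with the function and equation swapped in, reproduce the checks in the verification problems below; this is also essentially what the "use some kind of technology" instructions in Problems 42-47 amount to.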
Problems: 1. 2, nonlinear. 2. 3, linear. 3. 2, nonlinear. 4. 2, nonlinear. 5. 4, linear. 6. 3, nonlinear. 7. We can quickly compute the ﬁrst two derivatives of y (x): y (x) = (c1 +2c2 )ex cos 2x +(−2c1 + c2 )ex sin 2x and y (x) = (−3c1 +4c2 )ex cos 2x +(−4c1 − 3c2 )ex sin x. Then we have y − 2y + 5y x x = [(−3c1 + 4c2 )e cos 2x + (−4c1 − 3c2 )e sin x]−2 [(c1 + 2c2 )ex cos 2x + (−2c1 + c2 )ex sin 2x]+5(c1 ex cos 2x+c2 ex sin 2x), which cancels to 0, as required. This solution is valid for all x ∈ R. 8. We can quickly compute the ﬁrst two derivatives of y (x): y (x) = c1 ex − 2c2 e−2x and y (x) = c1 ex + 4c2 e−2x . Then we have y + y − 2y = (c1 ex + 4c2 e−2x ) + (c1 ex − 2c2 e−2x ) − 2(c1 ex + c2 e−2x ) = 0. Thus y (x) = c1 ex + c2 e−2x is a solution of the given diﬀerential equation for all x ∈ R. 11 1 1 1 = −y 2 . Thus y (x) = is y (x) = − is a solution of the given x+4 (x + 4)2 x+4 diﬀerential equation for x ∈ (−∞, −4) or x ∈ (−4, ∞). 9. The derivative of y (x) = √ 10. The derivative of y (x) = c1 x is y (x) = √ c1 y √= . Thus y (x) = c1 x is a solution of the given 2x 2x diﬀerential equation for all x > 0. 11. We can quickly compute the ﬁrst two derivatives of y (x) = c1 e−x sin (2x): y (x) = 2c1 e−x cos (2x) − c1 e−x sin (2x) and y (x) = −3c1 e−x sin (2x) − 4c1 e−x cos (2x). Therefore, we have y + 2y + 5y = −3c1 e−x sin (2x) − 4c1 e−x cos (2x) + 2[2c1 e−x cos (2x) − c1 e−x sin (2x)] + 5[c1 e−x sin (2x)] = 0, which shows that y (x) = c1 e−x sin (2x) is a solution to the given diﬀerential equation for all x ∈ R. 12. We can quickly compute the ﬁrst two derivatives of y (x) = c1 cosh (3x) + c2 sinh (3x): y (x) = 3c1 sinh (3x) + 3c2 cosh (3x) and y (x) = 9c1 cosh (3x) + 9c2 sinh (3x). Therefore, we have y − 9y = [9c1 cosh (3x) + 9c2 sinh (3x)] − 9[c1 cosh (3x) + c2 sinh (3x)] = 0, which shows that y (x) = c1 cosh (3x) + c2 sinh (3x) is a solution to the given diﬀerential equation for all x ∈ R. 13. 
We can quickly compute the ﬁrst two derivatives of y (x) = y (x) = − 3c1 c2 −2 x4 x and c1 c2 +: x3 x y (x) = 12c1 2c2 + 3. x5 x Therefore, we have x2 y + 5xy + 3y = x2 which shows that y (x) = x ∈ (0, ∞). 12c1 2c2 +3 5 x x + 5x − 3c1 c2 −2 4 x x +3 c1 c2 + 3 x x = 0, c2 c1 + is a solution to the given diﬀerential equation for all x ∈ (−∞, 0) or 3 x x √ 14. We can quickly compute the ﬁrst two derivatives of y (x) = c1 x + 3x2 : c1 y (x) = √ + 6x 2x and c1 y (x) = − √ + 6. 4 x3 Therefore, we have √ c1 c1 √ + 6x + (c1 x + 3x2 ) = 9x2 , 2x2 y − xy + y = 2x2 − √ + 6 − x 3 2x 4x √ which shows that y (x) = c1 x + 3x2 is a solution to the given diﬀerential equation for all x > 0. 12 15. We can quickly compute the ﬁrst two derivatives of y (x) = c1 x2 + c2 x3 − x2 sin x: y (x) = 2c1 x + 3c2 x2 − x2 cos x − 2x sin x and y (x) = 2c1 + 6c2 x + x2 sin x − 2x cos x − 2x cos −2 sin x. Substituting these results into the given diﬀerential equation yields x2 y − 4xy + 6y = x2 (2c1 + 6c2 x + x2 sin x − 4x cos x − 2 sin x) − 4x(2c1 x + 3c2 x2 − x2 cos x − 2x sin x) + 6(c1 x2 + c2 x3 − x2 sin x) = 2c1 x2 + 6c2 x3 + x4 sin x − 4x3 cos x − 2x2 sin x − 8c1 x2 − 12c2 x3 + 4x3 cos x + 8x2 sin x + 6c1 x2 + 6c2 x3 − 6x2 sin x = x4 sin x. Hence, y (x) = c1 x2 + c2 x3 − x2 sin x is a solution to the diﬀerential equation for all x ∈ R. 16. We can quickly compute the ﬁrst two derivatives of y (x) = c1 eax + c2 ebx : y (x) = ac1 eax + bc2 ebx and y (x) = a2 c1 eax + b2 c2 ebx . Substituting these results into the diﬀerential equation yields y − (a + b)y + aby = a2 c1 eax + b2 c2 ebx − (a + b)(ac1 eax + bc2 ebx ) + ab(c1 eax + c2 ebx ) = (a2 c1 − a2 c1 − abc1 + abc1 )eax + (b2 c2 − abc2 − b2 c2 + abc2 )ebx = 0. Hence, y (x) = c1 eax + c2 ebx is a solution to the given diﬀerential equation for all x ∈ R. 17. 
We can quickly compute the ﬁrst two derivatives of y (x) = eax (c1 + c2 x): y (x) = eax (c2 ) + aeax (c1 + c2 x) = eax (c2 + ac1 + ac2 x) and y (x) = eaax (ac2 ) + aeax (c2 + ac1 + ac2 x) = aeax (2c2 + ac1 + ac2 x). Substituting these into the diﬀerential equation yields y − 2ay + a2 y = aeax (2c2 + ac1 + ac2 x) − 2aeax (c2 + ac1 + ac2 x) + a2 eax (c1 + c2 x) = aeax (2c2 + ac1 + ac2 x − 2c2 − 2ac1 − 2ac2 x + ac1 + ac2 x) = 0. Thus, y (x) = eax (c1 + c2 x) is a solution to the given diﬀerential equation for all x ∈ R. 18. We can quickly compute the ﬁrst two derivatives of y (x) = eax (c1 cos bx + c2 sin bx): y (x) = eax (−bc1 sin bx + bc2 cos bx) + aeax (c1 cos bx + c2 sin bx) = eax [(bc2 + ac1 ) cos bx + (ac2 − bc1 ) sin bx], y (x) = eax [−b(bc2 + ac1 ) sin bx + b(ac2 + bc1 ) cos bx] + aeax [(bc2 + ac1 ) cos bx + (ac2 + bc1 ) sin bx] = eax [(a2 c1 − b2 c1 + 2abc2 ) cos bx + (a2 c2 − b2 c2 − abc1 ) sin bx]. 13 Substituting these results into the diﬀerential equation yields y − 2ay + (a2 + b2 )y = (eax [(a2 c1 − b2 c1 + 2abc2 ) cos bx + (a2 c2 − b2 c2 − abc1 ) sin bx]) − 2a(eax [(bc2 + ac1 ) cos bx + (ac2 − bc1 ) sin bx]) + (a2 + b2 )(eax (c1 cos bx + c2 sin bx)) = eax [(a2 c1 − b2 c1 + 2abc2 − 2abc2 − 2a2 c1 + a2 c1 + b2 c1 ) cos bx + (a2 c2 − b2 c2 − 2abc1 + 2abc1 − 2a2 c2 + a2 c2 + b2 c2 ) sin bx] = 0. Thus, y (x) = eax (c1 cos bx + c2 sin bx) is a solution to the given diﬀerential equation for all x ∈ R. 19. From y (x) = erx , we obtain y (x) = rerx and y (x) = r2 erx . Substituting these results into the given diﬀerential equation yields erx (r2 + 2r − 3) = 0, so that r must satisfy r2 + 2r − 3 = 0, or (r + 3)(r − 1) = 0. Consequently r = −3 and r = 1 are the only values of r for which y (x) = erx is a solution to the given diﬀerential equation. The corresponding solutions are y (x) = e−3x and y (x) = ex . 20. From y (x) = erx , we obtain y (x) = rerx and y (x) = r2 erx . 
Substituting these results into the given diﬀerential equation yields erx (r2 − 8r + 16) = 0, so that r must satisfy r2 − 8r + 16 = 0, or (r − 4)2 = 0. Consequently the only value of r for which y (x) = erx is a solution to the diﬀerential equation is r = 4. The corresponding solution is y (x) = e4x . 21. From y (x) = xr , we obtain y (x) = rxr−1 and y (x) = r(r − 1)xr−2 . Substituting these results into the given diﬀerential equation yields xr [r(r − 1) + r − 1] = 0, so that r must satisfy r2 − 1 = 0. Consequently r = −1 and r = 1 are the only values of r for which y (x) = xr is a solution to the given diﬀerential equation. The corresponding solutions are y (x) = x−1 and y (x) = x. 22. From y (x) = xr , we obtain y (x) = rxr−1 and y (x) = r(r − 1)xr−2 . Substituting these results into the given diﬀerential equation yields xr [r(r − 1) + 5r + 4] = 0, so that r must satisfy r2 + 4r + 4 = 0, or equivalently (r + 2)2 = 0. Consequently r = −2 is the only value of r for which y (x) = xr is a solution to the given diﬀerential equation. The corresponding solution is y (x) = x−2 . 1 1 23. From y (x) = 2 x(5x2 − 3) = 1 (5x3 − 3x), we obtain y (x) = 2 (15x2 − 3) and y (x) = 15x. Substituting 2 these results into the Legendre equation with N = 3 yields (1 − x2 )y − 2xy + 12y = (1 − x2 )(15x) + x(15x2 − 3) + 6x(5x2 − 3) = 0. Consequently, the given function is a solution to the Legendre equation with N = 3. 24. From y (x) = a0 + a1 x + a2 x2 , we obtain y (x) = a1 + 2a2 x and y (x) = 4a2 . Substituting these results into the given diﬀerential equation yields (1 − x2 )(2a2 ) − x(a1 + 2a2 x) + 4(a0 + a1 x + a2 x2 ) = 0. That is, 3a1 x +2a2 +4a0 = 0. For this equation to hold for all x we require 3a1 = 0, and 2a2 +4a0 = 0. Consequently, a1 = 0 and a2 = −2a0 . The corresponding solution to the diﬀerential equation is y (x) = a0 (1 − 2x2 ). Imposing the normalization condition y (1) = 1 requires that a0 = −1. 
Hence, the required solution to the diﬀerential equation is y (x) = 2x2 − 1. dy 25. Diﬀerentiating x sin y − ex = c implicitly with respect to x yields x cos y + sin y − ex = 0. Thus, dx dy ex − sin y = . dx x cos y 14 26. Diﬀerentiating xy 2 + 2y − x = c implicitly with respect to x yields 2xy dy 1 − y2 = . dx 2(xy + 1) dy dy + y2 + 2 − 1 = 0. Thus, dx dx dy + y ] − 1 = 0. Therefore, we have 27. Diﬀerentiating exy − x = c implicitly with respect to x yields exy [x dx xy dy dy 1 − ye xexy . Given that y (1) = 0, we have e0(1) − 1 = c, so that c = 0. + yexy = 1. Hence, = dx dx xexy ln x Therefore, exy − x = 0. Rearranging this equation and taking logarithms, we conclude that y = . x 28. Diﬀerentiating ey/x + xy 2 − x = c implicitly with respect to x yields dy −y dy ey/x dx 2 + 2xy + y 2 − 1 = 0. x dx x A short algebraic simpliﬁcation yields dy x2 (1 − y 2 ) + yey/x = . dx x(ey/x + 2x2 y ) 29. Diﬀerentiating x2 y 2 −sin x = c implicitly with respect to x yields 2x2 y 2 dy +2xy 2 −cos x = 0. Rearranging, dx cos x − 2xy 2 1 1 dy = . Since y (π ) = , we have π 2 − sin π = c. Therefore, c = 1. Hence, dx 2x2 y π π 1 + sin x 1 x2 y 2 − sin x = 1 so that y 2 = . Since y (π ) = , take the branch of y where x > 0, so that x2 π √ 1 + sin x y (x) = . x we obtain 30. By integrating dy = sin x with respect to x, we obtain y (x) = − cos x + c for all x ∈ R. dx 31. By integrating dy = x−1/2 with respect to x, we obtain y (x) = 2x1/2 + c for all x > 0. dx dy d2 y 32. By integrating = xex twice with respect to x, we obtain = xex − ex + c1 and y (x) = xex − dx2 dx x 2e + c1 x + c2 for all x ∈ R. 33. We consider three cases: Case 1: n = −1: In this case, two integrations with respect to x yields x ln |x| + c1 x + c2 for all x ∈ (−∞, 0) or x ∈ (0, ∞). Case 2: n = −2: In this case, two integrations with respect to x yields c1 x + c2 − ln |x| for all x ∈ (−∞, 0) or x ∈ (0, ∞). 
dy = ln |x| + c1 and y (x) = dx dy = −x−1 + c1 and y (x) = dx 15 Case 3: n = −1, −2: In this case, two integrations with respect to x yields xn+2 + c1 x + c2 for all x ∈ R. (n + 1)(n + 2) dy xn+1 = + c1 and y (x) = dx n+1 dy = ln x with respect to x yields y (x) = x ln x−x+c. Since y (1) = 2, we have 2 = 1(0)−1+c, dx so that c = 3. Thus, y (x) = x ln x − x + 3. 34. Integrating 35. Integrating d2 y = cos x twice with respect to x yields dx2 dy = sin x + c1 dx y (x) = − cos x + c1 x + c2 . and From y (0) = 1, we obtain c1 = 1, and from y (0) = 2, we obtain c2 = 3. Thus, y (x) = 3 + x − cos x. 36. Integrating d3 y = 6x three times with respect to x, we obtain dx3 d2 y = 3x2 + c1 , dx2 dy = x3 + c1 x + c2 , dx y (x) = 14 1 x + c1 x2 + c2 x + c3 . 4 2 From y (0) = 4, we obtain c1 = 4, from y (0) = −1, we obtain c2 = −1, and from y (0) = 1, we obtain 1 c3 = 1. Thus, y (x) = 4 x4 + 2x2 − x + 1. 37. Integrating y (x) = xex twice with respect to x, we obtain y (x) = xex − ex + c1 and y (x) = xex − 2ex + c1 x + c2 . From y (0) = 4, we obtain c1 = 5, and from y (0) = 3, we obtain c2 = 5. Thus, y (x) = xex − 2ex + 5x + 5. 38. Starting with y (x) = c1 ex + c2 e−x , we ﬁnd that y (x) = c1 ex − c2 e−x and y (x) = c1 ex + c2 e−x . Thus, y − y = 0, so y (x) = c1 ex + c2 e−x is a solution to the diﬀerential equation on (−∞, ∞). Next we establish that every solution to the diﬀerential equation has the form c1 ex + c2 e−x . Suppose that y = f (x) is any solution to the diﬀerential equation. Then according to Theorem 1.2.15, y = f (x) is the unique solution to the initial-value problem y − y = 0, y (0) = f (0), y (0) = f (0). However, consider the function y (x) = f (0) + f (0) x f (0) − f (0) −x e+ e. 2 2 This is of the form y (x) = c1 ex + c2 e−x , where c1 = f (0)+f (0) and c2 = f (0)−f (0) , and therefore solves the 2 2 diﬀerential equation y − y = 0. Furthermore, evaluation this function at x = 0 yields y (0) = f (0) and y (0) = f (0). 
Consequently, this function solves the initial-value problem above. However, by assumption, y (x) = f (x) solves the same initial-value problem. Owing to the uniqueness of the solution to this initial-value problem, it follows that these two solutions are the same: f (x) = c1 ex + c2 e−x . 16 Consequently, every solution to the diﬀerential equation has the form y (x) = c1 ex + c2 e−x , and therefore this is the general solution on any interval I . 39. Integrating d2 y = e−x twice with respect to x, we obtain dx2 dy = −e−x + c1 dx y (x) = e−x + c1 x + c2 . and From y (0) = 1, we obtain c2 = 0, and from y (1) = 0, we obtain c1 = − 1 . Hence, y (x) = e−x − 1 x. e e 40. Integrating d2 y = −6 − 4 ln x twice with respect to x, we obtain dx2 dy = −2x − 4x ln x + c1 dx and y (x) = −2x2 ln x + c1 x + c2 . From y (1) = 0, we obtain c1 + c2 = 0, and from y (e) = 0, we obtain ec1 + c2 = 2e2 . Solving this system 2e2 2e2 2e2 and c2 = − . Thus, y (x) = (x − 1) − 2x2 ln x. yields c1 = e−1 e−1 e−1 41. We use the general solution y (x) = c1 cos x + c2 sin x and seek values of c1 and c2 . (a) From y (0) = 0, we ﬁnd that c1 = 0, and from y (π ) = 1, we ﬁnd that 1 = c2 (0), which is impossible. Hence, we have no solutions. (b) From y (0) = 0, we ﬁnd that c1 = 0, and from y (π ) = 0, we ﬁnd that 0 = c2 (0), so c2 can be any real number. Hence, we have inﬁnitely many solutions. 42-47. Use some kind of technology to deﬁne each of the given functions. Then use the technology to simplify the expression given on the left-hand side of each diﬀerential equation and verify that the result corresponds to the expression on the right-hand side. 48. (a) Use some form of technology to substitute y (x) = a + bx + cx2 + dx3 + ex4 + f x5 where a, b, c, d, e, f are constants, into the given Legendre equation and set the coeﬃcients of each power of x in the resulting equation to zero. The result is: e = 0, 20f + 18d = 0, e + 2c = 0, 3d + 14b = 0, c + 15a = 0. 
9 Now solve for the constants to ﬁnd: a = c = e = 0, d = − 14 b, f = − 10 d = 3 corresponding solution to the Legendre equation is: y (x) = bx 1 − (−1)k 2 k=0 (k !) ∞ x 2 2k = 1 − 1 x2 + 4 14 64 x Consequently, the 14 2 21 4 x+ x . 3 5 Imposing the normalization condition y (1) = 1 requires 1 = b(1 − 1 required solution is y (x) = 8 x(15 − 70x2 + 63x4 ). 49. (a) J0 (x) = 21 5 b. 14 3 + 21 5) =⇒ b = 15 8. Consequently, the + ... (b) A Maple plot of J (0, x, 4) is given in the accompanying ﬁgure. From this graph, an approximation to the ﬁrst positive zero of J0 (x) is 2.4. Using the Maple internal function BesselJZeros gives the approximation 2.404825558. 17 J(0, x, 4) 1 0.8 0.6 Approximation to the first positive zero of J0(x) 0.4 0.2 0 1 2 3 x 4 –0.2 Figure 8: Figure for Problem 49(b) (c) A Maple plot of the functions J0 (x) and J (0, x, 4) on the interval [0,2] is given in the accompanying ﬁgure. We see that to the printer resolution, these graphs are indistinguishable. On a larger interval, for example, [0,3], the two graphs would begin to diﬀer dramatically from one another. J0(x), J(0, x, 4) 1 0.8 0.6 0.4 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 x 2 Figure 9: Figure for Problem 49(c) (d) By trial and error, we ﬁnd the smallest value of m to be m = 11. A plot of the functions J (0, x) and J (0, x, 11) is given in the accompanying ﬁgure. J(0, x), J(0, x, 11) 1 0.8 0.6 0.4 0.2 0 2 4 6 8 10 x J(0, x) –0.2 –0.4 J(0, x, 11) Figure 10: Figure for Problem 49(d) 18 Solutions to Section 1.3 True-False Review: 1. TRUE. This is precisely the remark after Theorem 1.3.2. 2. FALSE. For instance, the diﬀerential equation in Example 1.3.6 has no equilibrium solutions. 3. FALSE. This diﬀerential equation has equilibrium solutions y (x) = 2 and y (x) = −2. 4. TRUE. For this diﬀerential equation, we have f (x, y ) = x2 + y 2 . Therefore, any equation of the form x2 + y 2 = k is an isocline, by deﬁnition. 5. TRUE. Equilibrium solutions are always horizontal lines. 
These are always parallel to each other. 6. TRUE. The isoclines have the form is valid. 7. dy dx x2 +y 2 2y = k , or x2 + y 2 = 2ky , or x2 + (y − k )2 = k 2 , so the statement TRUE. An equilibrium solution is a solution, and two solution curves to the diﬀerential equation = f (x, y ) do not intersect. Problems: 1. Taking the derivative of y = cx−1 , we obtain 2. Taking the derivative of y = cx2 , we obtain dy y dy = −cx−2 . Since c = xy , this becomes =− . dx dx x dy y 2y dy = 2cx. Since c = 2 , this becomes = . dx x dx x dy x2 + y 2 = 2c. Now, c = , so dx 2x 2 2 2 2 2 2 x +y dy x +y dy y −x dy = . Hence, y = − x. Consequently, = . 2x + 2y dx x dx 2x dx 2xy 3. Implicit diﬀerentiation of x2 + y 2 = 2cx with respect to x yields 2x + 2y dy y2 dy y2 4. Implicit diﬀerentiation of y 2 = cx with respect to x yields 2y = c. Since c = , we have 2y =. dx x dx x dy y Rearranging this equation, we obtain = . dx 2x 5. Implicit diﬀerentiation of 2cy = x2 − c2 with respect to x yields 2c c2 + 2cy − x2 = 0, we use the quadratic equation to obtain c= −2y ± 4y 2 + 4x2 = −y ± 2 dy dy x = 2x, so that = . Now from dx dx c x2 + y 2 . Hence, dy x x == . dx c −y ± x2 + y 2 6. Implicit diﬀerentiation of y 2 − x2 = c with respect to x yields 2y dy dy x − 2x = 0. Hence, =. dx dx y 19 7. Algebraic simpliﬁcation of (x − c)2 + (y − c)2 = 2c2 yields x2 − 2cx + y 2 − 2cy = 0. Implicit diﬀerentiation of this equation gives dy dy 2x − 2c + 2y − 2c = 0. dx dx Therefore, dy 2c − 2x c−x = = . dx 2y − 2c y−c dy x2 + y 2 . Substituting this value for c into the previous formula for and simplifying, we 2(x + y ) dx dy x2 + 2xy − y 2 obtain . =− 2 dx y + 2xy − x2 We have c = 8. Implicit diﬀerentiation of x2 + y 2 = c with respect to x yields 2x + 2y dy dy x = 0. Thus, =− . dx dx y y(x) 2.0 1.6 1.2 0.8 0.4 x -2 -1 1 -0.4 2 -0.8 -1.2 -1.6 -2.0 Figure 11: Figure for Problem 8 dy y 3y = 3cx2 = 3 3 x2 = . The initial condition dx x x 3 y (2) = 8 implies that 8 = c(2) . Hence, c = 1. 
Thus the unique solution to the initial-value problem is y = x^3.

Figure 12: Figure for Problem 9

10. Implicit differentiation of y^2 = cx with respect to x yields 2y dy/dx = c. Thus, 2y dy/dx = y^2/x, which can be rearranged to read dy/dx = y/(2x). That is, 2x dy − y dx = 0. The initial condition y(1) = 2 implies that c = 4, so that the unique solution to the initial-value problem is y^2 = 4x.

Figure 13: Figure for Problem 10

11. Expanding (x − c)^2 + y^2 = c^2 yields x^2 − 2cx + c^2 + y^2 = c^2, or x^2 − 2cx + y^2 = 0. Differentiating with respect to x gives 2x − 2c + 2y dy/dx = 0. Solving for dy/dx, we have dy/dx = (c − x)/y. Now, c = (x^2 + y^2)/(2x), and substituting this into the differential equation just obtained and simplifying yields dy/dx = (y^2 − x^2)/(2xy). Imposing the initial condition y(2) = 2 in the equation x^2 − 2cx + y^2 = 0, we find that c = 2. Therefore, the unique solution to the initial-value problem is y = +sqrt(x(4 − x)).

Figure 14: Figure for Problem 11

12. Let f(x, y) = x sin(x + y), which is continuous for all x, y ∈ R. Then ∂f/∂y = x cos(x + y), which is continuous for all x, y ∈ R. By Theorem 1.3.2, the initial-value problem

dy/dx = x sin(x + y), y(x_0) = y_0

has a unique solution on some interval I ⊆ R.

13. Let f(x, y) = x(y^2 − 9)/(x^2 + 1), which is continuous for all x, y ∈ R. Then ∂f/∂y = 2xy/(x^2 + 1), which is continuous for all x, y ∈ R. Therefore, by the existence and uniqueness theorem, the initial-value problem stated above has a unique solution on any interval containing the initial point (0, 3). By inspection we see that y(x) = 3 is the unique solution.

14. The initial-value problem does not necessarily have a unique solution, since the hypotheses of the existence and uniqueness theorem are not satisfied at (0, 0).
This follows since f(x, y) = xy^(1/2), so that ∂f/∂y = (1/2)xy^(−1/2), which is not continuous at (0, 0).

15. (a) We have f(x, y) = −2xy^2, so ∂f/∂y = −4xy. Both of these functions are continuous for all (x, y), and therefore the hypotheses of the existence and uniqueness theorem are satisfied for any (x_0, y_0) in the xy-plane.

(b) For y(x) = 1/(x^2 + c), we have y'(x) = −2x/(x^2 + c)^2 = −2xy^2.

(c) (i) From y(0) = 1, we find that c = 1. Hence, y(x) = 1/(x^2 + 1). This is valid on the interval (−∞, ∞).

Figure 15: Figure for Problem 15c(i)

(ii) From y(1) = 1, we find that c = 0. Hence, y(x) = 1/x^2. This is valid on the interval (0, ∞).

Figure 16: Figure for Problem 15c(ii)

(iii) From y(0) = −1, we find that c = −1. Hence, y(x) = 1/(x^2 − 1). This is valid on the interval (−1, 1).

Figure 17: Figure for Problem 15c(iii)

(d) Since, by inspection, y(x) = 0 satisfies the given initial-value problem, it must be the unique solution.

16. (a) Both f(x, y) = y(y − 1) and ∂f/∂y = 2y − 1 are continuous at all points (x, y). Consequently, the hypotheses of the existence and uniqueness theorem are satisfied by the given initial-value problem for any (x_0, y_0) in the xy-plane.

(b) Equilibrium solutions: y(x) = 0, y(x) = 1.

(c) Differentiating the given differential equation yields d^2y/dx^2 = (2y − 1) dy/dx = (2y − 1)y(y − 1). Hence the solution curves are concave up for 0 < y < 1/2 and y > 1, and concave down for y < 0 and 1/2 < y < 1.

(d) The solutions will be bounded provided 0 ≤ y_0 ≤ 1.

Figure 18: Figure for Problem 16(d)

17. There are no equilibrium solutions. The slope of the solution curves is positive for x > 0 and is negative for x < 0. The isoclines are the lines x = k/4.

Slope of Solution Curve    Equation of Isocline
−4                         x = −1
−2                         x = −1/2
0                          x = 0
2                          x = 1/2
4                          x = 1

Figure 19: Figure for Problem 17

18.
There are no equilibrium solutions. The slope of the solution curves is positive for x > 0 and increases without bound as x → 0+. The slope of the curves is negative for x < 0 and decreases without bound as x → 0−. The isoclines are the lines x = 1/k.

Slope of Solution Curve    Equation of Isocline
±4                         x = ±1/4
±2                         x = ±1/2
±1/2                       x = ±2
±1/4                       x = ±4
±1/10                      x = ±10

Figure 20: Figure for Problem 18

19. There are no equilibrium solutions. The slope of the solution curves is positive for y > −x, and negative for y < −x. The isoclines are the lines y + x = k.

Slope of Solution Curve    Equation of Isocline
−2                         y = −x − 2
−1                         y = −x − 1
0                          y = −x
1                          y = −x + 1
2                          y = −x + 2

Since the slope of the solution curve along the isocline y = −x − 1 coincides with the slope of the isocline, it follows that y = −x − 1 is a solution to the differential equation. Differentiating the given differential equation yields y'' = 1 + y' = 1 + x + y. Hence the solution curves are concave up for y > −x − 1, and concave down for y < −x − 1. Putting this information together leads to the slope field in the accompanying figure.

Figure 21: Figure for Problem 19

20. There are no equilibrium solutions. The slope of the solution curves is zero when x = 0. The solutions have a vertical tangent line at all points along the x-axis (except the origin). Differentiating the differential equation yields

y'' = 1/y − (x/y^2) y' = 1/y − x^2/y^3 = (y^2 − x^2)/y^3.

Hence the solution curves are concave up for y > 0 with y^2 > x^2, and for y < 0 with y^2 < x^2; they are concave down for y > 0 with y^2 < x^2, and for y < 0 with y^2 > x^2. The isoclines are the lines x/y = k.

Slope of Solution Curve    Equation of Isocline
±2                         y = ±x/2
±1                         y = ±x
±1/2                       y = ±2x
±1/4                       y = ±4x
±1/10                      y = ±10x

Note that y = ±x are solutions to the differential equation.

Figure 22: Figure for Problem 20

21. The slope of the solution curves is zero when x = 0 (and y ≠ 0).
The solutions have a vertical tangent line at all points along the x-axis (except the origin). The isoclines are the lines −4x/y = k. Some values are given in the table below.

Slope of Solution Curve    Equation of Isocline
±1                         y = ∓4x
±2                         y = ∓2x
±3                         y = ∓4x/3

Differentiating the given differential equation yields

y'' = −4/y + (4x/y^2) y' = −4/y − 16x^2/y^3 = −4(y^2 + 4x^2)/y^3.

Consequently the solution curves are concave up for y < 0, and concave down for y > 0. Putting this information together leads to the slope field in the accompanying figure.

Figure 23: Figure for Problem 21

22. Equilibrium solution: y(x) = 0. Therefore, no solution curve can cross the x-axis. Slope: zero when x = 0 or y = 0; positive when y > 0 (and x ≠ 0); negative when y < 0 (and x ≠ 0). Differentiating the given differential equation yields

d^2y/dx^2 = 2xy + x^2 dy/dx = 2xy + x^4 y = xy(2 + x^3).

So, when y > 0, the solution curves are concave up for x ∈ (−∞, (−2)^(1/3)) and for x > 0, and the solution curves are concave down for x ∈ ((−2)^(1/3), 0). When y < 0, the solution curves are concave up for x ∈ ((−2)^(1/3), 0), and concave down for x ∈ (−∞, (−2)^(1/3)) and for x > 0. The isoclines are the hyperbolas x^2 y = k.

Slope of Solution Curve    Equation of Isocline
±2                         y = ±2/x^2
±1                         y = ±1/x^2
±1/2                       y = ±1/(2x^2)
±1/4                       y = ±1/(4x^2)
±1/10                      y = ±1/(10x^2)
0                          y = 0

Figure 24: Figure for Problem 22

23. The slope is zero when x = 0. There are equilibrium solutions when y = (2k + 1)π/2. The slope field is best sketched using technology. The accompanying figure gives the slope field for −π/2 < y < 3π/2.

Figure 25: Figure for Problem 23

24. The slope of the solution curves is zero at the origin, and positive at all the other points. There are no equilibrium solutions. The isoclines are the circles x^2 + y^2 = k.
Slope of Solution Curve    Equation of Isocline
1                          x^2 + y^2 = 1
2                          x^2 + y^2 = 2
3                          x^2 + y^2 = 3
4                          x^2 + y^2 = 4
5                          x^2 + y^2 = 5

Figure 26: Figure for Problem 24

25. Substituting the constants given in the problem, the differential equation reads dT/dt = −(1/80)(T − 70). Equilibrium solution: T(t) = 70. The slope of the solution curves is negative for T > 70, and positive for T < 70. Taking the derivative of both sides of the differential equation, we arrive at

d^2T/dt^2 = −(1/80) dT/dt = (1/6400)(T − 70).

Hence the solution curves are concave up for T > 70, and concave down for T < 70. The isoclines are the horizontal lines −(1/80)(T − 70) = k.

Slope of Solution Curve    Equation of Isocline
−1/4                       T = 90
−1/5                       T = 86
0                          T = 70
1/5                        T = 54
1/4                        T = 50

Figure 27: Figure for Problem 25

26. See the accompanying figure.

Figure 28: Figure for Problem 26

27. See the accompanying figure.

Figure 29: Figure for Problem 27

28. See the accompanying figure.

Figure 30: Figure for Problem 28

29. See the accompanying figure.

Figure 31: Figure for Problem 29

30. See the accompanying figure.

Figure 32: Figure for Problem 30

31. See the accompanying figure.

Figure 33: Figure for Problem 31

32. (a) See the accompanying figure.

Figure 34: Figure for Problem 32(a)

(b) The figure suggests that the solutions to the differential equation are unbounded as x → 0+.

Figure 35: Figure for Problem 32(b)

(c) This solution curve is bounded as x → 0+.

Figure 36: Figure for Problem 32(c)

(d) In the accompanying figure we have sketched several solution curves on the interval (0, 15]. The figure suggests that the solution curves approach the x-axis as x → ∞.

Figure 37: Figure for Problem 32(d)

33. (a) Differentiating the given equation gives dy/dx = 2kx = 2y/x. Hence the differential equation satisfied by the orthogonal trajectories is dy/dx = −x/(2y).
(b) The orthogonal trajectories appear to be ellipses. This can be verified by integrating the differential equation derived in (a).

Figure 38: Figure for Problem 33(b)

34. If a > 0, then as illustrated in the following slope field (a = 0.5, b = 1), it appears that lim_{t→∞} i(t) = b/a.

Figure 39: Figure for Problem 34 when a > 0

If a < 0, then as illustrated in the following slope field (a = −0.5, b = 1), it appears that i(t) diverges as t → ∞.

Figure 40: Figure for Problem 34 when a < 0

Finally, if a = 0 and b ≠ 0, then once more i(t) diverges as t → ∞. The accompanying figure shows a representative case when b > 0. Here we see that lim_{t→∞} i(t) = +∞. If b < 0, then lim_{t→∞} i(t) = −∞. If a = b = 0, then the general solution to the differential equation is i(t) = i_0, where i_0 is a constant.

Figure 41: Figure for Problem 34 when a = 0

Solutions to Section 1.4

True-False Review:

1. TRUE. The differential equation dy/dx = f(x)g(y) can be written as (1/g(y)) dy/dx = f(x), which is the proper form, according to Definition 1.4.1, for a separable differential equation.

2. TRUE. A separable differential equation is a first-order differential equation, so the general solution contains one constant. The value of that constant can be determined from an initial condition, as usual.

3. TRUE. Newton's Law of Cooling is usually expressed as dT/dt = −k(T − T_m), and this can be rewritten as

(1/(T − T_m)) dT/dt = −k,

and this form shows that the equation is separable.

4. FALSE. The expression x^2 + y^2 cannot be separated in the form f(x)g(y), so the equation is not separable.

5. FALSE. The expression x sin(xy) cannot be separated in the form f(x)g(y), so the equation is not separable.

6. TRUE. We can write the given equation as e^(−y) dy/dx = e^x, which is the proper form for a separable equation.

7. TRUE.
We can write the given equation as (1 + y^2) dy/dx = 1/x^2, which is the proper form for a separable equation.

8. FALSE. The expression (x + 4y)/(4x + y) cannot be separated in the form f(x)g(y), so the equation is not separable.

9. TRUE. The right-hand side of the given equation simplifies to xy, so we can write the equation as (1/y) dy/dx = x, which is the proper form for a separable equation.

Problems:

1. Separating the variables and integrating yields

∫ dy/y = ∫ 2x dx ⇒ ln|y| = x^2 + c_1 ⇒ y(x) = c e^(x^2).

2. Separating the variables and integrating yields

∫ y^(−2) dy = ∫ dx/(x^2 + 1) ⇒ −1/y = tan^(−1) x + c ⇒ y(x) = −1/(tan^(−1) x + c).

3. Separating the variables and integrating yields

∫ e^y dy = ∫ e^(−x) dx ⇒ e^y = −e^(−x) + c ⇒ y(x) = ln(c − e^(−x)).

4. Separating the variables and integrating yields

∫ dy/y = ∫ dx/(x ln x) ⇒ ln|y| = ln|ln x| + c ⇒ y(x) = c ln x.

5. Separating the variables and integrating yields

∫ dy/y = ∫ dx/(x − 2) ⇒ ln|y| = ln|x − 2| + c ⇒ ln|y/(x − 2)| = c ⇒ y/(x − 2) = c_1 ⇒ y = c_1(x − 2).

6. Separating the variables and integrating yields

∫ dy/(y − 1) = ∫ 2x dx/(x^2 + 3) ⇒ ln|y − 1| = ln|x^2 + 3| + c_1 ⇒ ln|(y − 1)/(x^2 + 3)| = c ⇒ (y − 1)/(x^2 + 3) = c_1 ⇒ y − 1 = c_1(x^2 + 3) ⇒ y(x) = c_1(x^2 + 3) + 1.

7. Regrouping the terms of the given equation, we have x(2x − 1) dy/dx = 3 − y. Separating the variables and integrating yields

−∫ dy/(y − 3) = ∫ dx/(x(2x − 1)) ⇒ −ln|y − 3| = −∫ dx/x + ∫ 2 dx/(2x − 1)
⇒ −ln|y − 3| = −ln|x| + ln|2x − 1| + c_1
⇒ x/((y − 3)(2x − 1)) = c_2
⇒ y − 3 = c_3 x/(2x − 1)
⇒ y(x) = c_3 x/(2x − 1) + 3 = (c_4 x − 3)/(2x − 1).

8. Using the trigonometric identity cos(x − y) = cos x cos y + sin x sin y, we can rewrite the differential equation as

dy/dx = cos(x − y)/(sin x sin y) − 1 = (cos x cos y)/(sin x sin y).

Separating the variables, we have (sin y/cos y) dy = (cos x/sin x) dx. Therefore,

∫ (sin y/cos y) dy = ∫ (cos x/sin x) dx ⇒ −ln|cos y| = ln|sin x| + c_1 ⇒ ln|cos y| = −ln|sin x| + c_2 ⇒ cos y = c_3 csc x.

9. Separating the variables, we have
dy/((y + 1)(y − 1)) = (1/2) x dx/((x − 2)(x − 1)),

where we must assume that y ≠ ±1. We have the partial fraction decompositions

1/((y + 1)(y − 1)) = −(1/2)·1/(y + 1) + (1/2)·1/(y − 1)

and

(1/2)·x/((x − 2)(x − 1)) = (1/2)[2/(x − 2) − 1/(x − 1)].

Integrating, we obtain

−(1/2)∫ dy/(y + 1) + (1/2)∫ dy/(y − 1) = (1/2)∫ [2/(x − 2) − 1/(x − 1)] dx
⇒ −ln|y + 1| + ln|y − 1| = 2 ln|x − 2| − ln|x − 1| + c_1
⇒ ln|(y − 1)/(y + 1)| = ln|(x − 2)^2/(x − 1)| + c_1
⇒ (y − 1)/(y + 1) = c (x − 2)^2/(x − 1).

A short algebraic manipulation solves this expression for y:

y(x) = [(x − 1) + c(x − 2)^2]/[(x − 1) − c(x − 2)^2].

We explicitly check the values y = ±1, which we had to exclude above. By inspection, we see that y(x) = 1 and y(x) = −1 are indeed also solutions of the given differential equation. The former is included in the above solution when c = 0.

10. We have dy/dx = (x^2 y − 32)/(16 − x^2) + 2 = (x^2 y − 2x^2)/(16 − x^2), which can be separated as dy/(y − 2) = x^2 dx/(16 − x^2). Now

x^2/(16 − x^2) = −(1 + 16/(x^2 − 16)) = −1 + 2/(x + 4) − 2/(x − 4).

Integrating, we obtain

ln|y − 2| = −x + 2 ln|x + 4| − 2 ln|x − 4| + c_1 ⇒ y(x) = 2 + c [(x + 4)/(x − 4)]^2 e^(−x).

11. Separating the variables, we have

dy/(y − c) = dx/((x − a)(x − b)) = (1/(a − b))[1/(x − a) − 1/(x − b)] dx.

Integrating, we obtain

ln|y − c| = (1/(a − b))(ln|x − a| − ln|x − b|) + c_1 = (1/(a − b)) ln|(x − a)/(x − b)| + c_1.

Exponentiation of both sides and removal of the absolute value gives y − c = c_2 [(x − a)/(x − b)]^(1/(a−b)). Therefore,

y(x) = c + c_2 [(x − a)/(x − b)]^(1/(a−b)).

12. Separating the variables, we have dy/(1 + y^2) = −dx/(1 + x^2). Integrating both sides, we have tan^(−1) y = −tan^(−1) x + c. Since y(0) = 1, we find that c = π/4. Thus, tan^(−1) y = π/4 − tan^(−1) x. A short manipulation can be used to rewrite this as y(x) = (1 − x)/(1 + x).

13. Separating the variables, we have dy/(a − y) = x dx/(1 − x^2). Integration on each side gives

−ln|a − y| = −(1/2) ln|1 − x^2| + c_1 ⇒ y(x) = a + c sqrt(1 − x^2).

The initial condition y(0) = 2a requires that c = a. Therefore, y(x) = a(1 + sqrt(1 − x^2)).

14.
Using the trigonometric identity sin(x + y) = sin x cos y + cos x sin y, we can rewrite the differential equation as

dy/dx = 1 − sin(x + y)/(sin y cos x) = −tan x cot y.

Separating the variables, we have

(sin y/cos y) dy = −(sin x/cos x) dx.

Integrating on each side gives −ln|cos y| = ln|cos x| + c_1. That is, ln|cos x cos y| = c_1. Since y(π/4) = π/4, we find that c_1 = ln(1/2). Hence, ln|cos x cos y| = ln(1/2), so that

y(x) = cos^(−1)((1/2) sec x).

15. Separation of variables gives dy/y^3 = sin x dx, where we must require y ≠ 0 here. Integrating on each side gives −1/(2y^2) = −cos x + c. However, we cannot impose the initial condition y(0) = 0 on the last equation since it is not defined at y = 0. But, by inspection, y(x) = 0 is a solution to the given differential equation, and further, y(0) = 0; thus, the unique solution to the initial-value problem is y(x) = 0.

16. For y ≠ 1, we can separate variables to obtain dy/sqrt(y − 1) = (2/3) dx. Integration on each side yields 2 sqrt(y − 1) = (2/3)x + c. The initial condition y(1) = 1 implies that c = −2/3. Therefore,

sqrt(y − 1) = (1/3)(x − 1) ⇒ y(x) = 1 + (1/9)(x − 1)^2.

This does not contradict the Existence-Uniqueness theorem, because the hypothesis of the theorem is not satisfied at the initial point (1, 1).

17. (a) Separating the variables v and t, we have m dv/(k[(mg/k) − v^2]) = dt. If we let a = sqrt(mg/k), then the preceding equation can be written as (m/k)·dv/(a^2 − v^2) = dt, which can be integrated directly to obtain

(m/(2ak)) ln|(a + v)/(a − v)| = t + c.

Upon exponentiating both sides,

(a + v)/(a − v) = c_1 e^(2akt/m).

Imposing the initial condition v(0) = 0 yields c_1 = 1, so that

(a + v)/(a − v) = e^(2akt/m).

Now a short algebraic manipulation yields

v(t) = a [e^(2akt/m) − 1]/[e^(2akt/m) + 1],

which can be written in the equivalent form v(t) = a tanh(gt/a), since ak/m = g/a.

(b) No. As t → ∞, v → a, and as t → 0+, v → 0.

(c) We integrate the velocity function v(t) = dy/dt to obtain

y(t) = ∫ a tanh(gt/a) dt = (a^2/g) ln(cosh(gt/a)) + c_1.

If y(0) = 0, then c_1 = 0 and

y(t) = (a^2/g) ln(cosh(gt/a)).

18. The required curve is the solution curve to the initial-value problem dy/dx = −x/(2y), y(0) = 1/2. Separating the variables in the differential equation yields 4y dy = −2x dx, which can be integrated directly to obtain
g a dy x = − , y (0) = 1 . Separating 2 dx 4y the variables in the diﬀerential equation yields 4y −1 dy = −1dx, which can be integrated directly to obtain 18. The required curve is the solution curve to the initial-value problem 38 x2 + c. Imposing the initial condition we obtain c = 1 , so that the solution curve has the equation 2 2 1 2y 2 = −x2 + 2 , or equivalently, 4y 2 + 2x2 = 1. 2y 2 = − dy 19. The required curve is the solution curve to the initial-value problem = ex−y , y (3) = 1. Separating dx the variables in the diﬀerential equation yields ey dy = ex dx, which can be integrated directly to obtain ey = ex + c. Imposing the initial condition we obtain c = e − e3 , so that the solution curve has the equation ey = ex + e − e3 , or equivalently, y = ln(ex + e − e3 ). dy 20. The required curve is the solution curve to the initial-value problem = x2 y 2 , y (−1) = 1. Separating dx 1 the variables in the diﬀerential equation yields 2 dy = x2 dx, which can be integrated directly to obtain y 1 13 2 − = x + c. Imposing the initial condition we obtain c = − 3 , so that the solution curve has the equation y 3 3 1 y = − 1 3 2 , or equivalently, y = . 2 − x3 x −3 3 1 dv = −dt. Integrating we 1 + v2 −1 −1 obtain tan (v ) = −t + c. The initial condition v (0) = v0 implies that c = tan (v0 ), so that tan−1 (v ) = −t + tan−1 (v0 ). The object will come to rest if there is time, tr , at which the velocity is zero. To determine tr , we set v = 0 in the previous equation which yields tan−1 (0) = −tr + tan−1 (v0 ). Consequently, tr = dv tan−1 (v0 ). The object does not remain at rest since we see from the given diﬀerential equation that <0 dt at t = tr , and so v is decreasing with time. Consequently v passes through zero and becomes negative for t > tr . dv dx dv dv dv = · . Then = v . Substituting this result into the diﬀerential (b) From the chain rule we have dt dx dt dt dx dv v equation (1.4.17) yields v = −(1 + v 2 ). We now separate the variables: dv = −dx. 
Integrating, we obtain ln(1 + v^2) = −2x + c. Imposing the initial conditions v(0) = v_0 and x(0) = 0, we find that c = ln(1 + v_0^2), so that ln(1 + v^2) = −2x + ln(1 + v_0^2). When the object comes to rest, the distance travelled by the object is x = (1/2) ln(1 + v_0^2).

22. (a) Separating the variables, we have v^(−n) dv = −k dt. Let us consider two cases:

Case 1: n ≠ 1. Integrating both sides of v^(−n) dv = −k dt yields v^(1−n)/(1 − n) = −kt + c. Imposing the initial condition v(0) = v_0 yields c = v_0^(1−n)/(1 − n), so that

v = [v_0^(1−n) + (n − 1)kt]^(1/(1−n)).

Observe that the object comes to rest in a finite time if there is a positive value of t for which v = 0. This requires v_0^(1−n) + (n − 1)kt = 0. That is, t = −v_0^(1−n)/((n − 1)k). If we assume v_0 > 0 and k > 0, t will be positive if and only if n < 1.

Case 2: n = 1. Integrating both sides of v^(−1) dv = −k dt and imposing the initial condition yields v(t) = v_0 e^(−kt), and the object does not come to rest in a finite amount of time.

(b) If n ≠ 1, 2, then dx/dt = [v_0^(1−n) + (n − 1)kt]^(1/(1−n)), where x(t) denotes the distance travelled by the object. Consequently,

x(t) = −(1/(k(2 − n))) [v_0^(1−n) + (n − 1)kt]^((2−n)/(1−n)) + c.

Imposing the initial condition x(0) = 0 yields c = v_0^(2−n)/(k(2 − n)), so that

x(t) = −(1/(k(2 − n))) [v_0^(1−n) + (n − 1)kt]^((2−n)/(1−n)) + v_0^(2−n)/(k(2 − n)).

For 1 < n < 2, we have (2 − n)/(1 − n) < 0, so that lim_{t→∞} x(t) = v_0^(2−n)/(k(2 − n)). Hence the maximum distance that the object can travel in a finite time is less than v_0^(2−n)/(k(2 − n)).

If n = 1, then we can integrate to obtain x(t) = (v_0/k)(1 − e^(−kt)), where we have imposed the initial condition x(0) = 0. Consequently, lim_{t→∞} x(t) = v_0/k. Thus in this case the maximum distance that the object can travel in a finite time is less than v_0/k.

(c) If n > 2, then x(t) = −(1/(k(2 − n))) [v_0^(1−n) + (n − 1)kt]^((2−n)/(1−n)) + v_0^(2−n)/(k(2 − n)) is still valid. However, in this case (2 − n)/(1 − n) > 0, and so lim_{t→∞} x(t) = +∞.
Consequently, there is no limit to the distance that the object can travel.

If n = 2, then we return to v = [v_0^(1−n) + (n − 1)kt]^(1/(1−n)). In this case dx/dt = (v_0^(−1) + kt)^(−1), which can be integrated directly to obtain x(t) = (1/k) ln(1 + v_0 kt), where we have imposed the initial condition x(0) = 0. Once more we see that lim_{t→∞} x(t) = +∞, so that there is no limit to the distance that the object can travel.

23. Substituting the relation ρ = ρ_0 (p/p_0)^(1/γ) into dp = −gρ dy, we obtain

dp = −gρ_0 (p/p_0)^(1/γ) dy,

or equivalently, p^(−1/γ) dp = −(gρ_0/p_0^(1/γ)) dy. This can be integrated directly to obtain

γ p^((γ−1)/γ)/(γ − 1) = −gρ_0 y/p_0^(1/γ) + c.

At the center of the Earth we have p = p_0. Imposing this initial condition on the preceding solution gives c = γ p_0^((γ−1)/γ)/(γ − 1). Substituting this value of c into the general solution to the differential equation we find, after some simplification,

p^((γ−1)/γ) = p_0^((γ−1)/γ) [1 − (γ − 1)ρ_0 g y/(γ p_0)],

so that

p = p_0 [1 − (γ − 1)ρ_0 g y/(γ p_0)]^(γ/(γ−1)).

24. Substituting the given values into the differential equation for Newton's Law of Cooling, we have

dT/dt = −k(T − 75) ⇒ dT/(T − 75) = −k dt ⇒ ln|T − 75| = −kt + c_1 ⇒ T(t) = 75 + c e^(−kt).

Now, T(0) = 135 implies that c = 60. Therefore, T(t) = 75 + 60 e^(−kt). Next, T(1) = 95 implies that 95 = 75 + 60 e^(−k), from which we quickly find that k = ln 3. Thus, T(t) = 75 + 60 e^(−t ln 3). Now if T(t) = 615, then 615 = 75 + 60 e^(−t ln 3). Therefore, t = −2 hours. Thus the object was placed in the room at 2 p.m.

25. Substituting the given values into the differential equation for Newton's Law of Cooling, we have dT/dt = −k(T − 450). Solving by the usual process, we arrive at T(t) = 450 + c e^(−kt). Since T(0) = 50, we compute that c = −400. Therefore, T(t) = 450 − 400 e^(−kt). Next, T(20) = 150 implies that k = (1/20) ln(4/3). Hence, T(t) = 450 − 400 (3/4)^(t/20).

(i) After 40 minutes, the temperature is T(40) = 450 − 400(3/4)^2 = 225°F.
(ii) We must solve for t in the formula T(t) = 350 = 450 − 400(3/4)^(t/20). We have

(3/4)^(t/20) = 1/4 ⇒ t = 20 ln 4/ln(4/3) ≈ 96.4 minutes.

26. Substituting the given values into the differential equation for Newton's Law of Cooling, we have dT/dt = −k(T − 34), which separates to dT/(T − 34) = −k dt. Solving by the usual process, we arrive at T(t) = 34 + c e^(−kt). Since T(0) = 38 (by setting t = 0 at 2 p.m.), we compute that c = 4. Therefore, T(t) = 34 + 4 e^(−kt). Next, T(1) = 36 implies that k = ln 2. Hence, T(t) = 34 + 4 e^(−t ln 2). Now the temperature at the time of death is T(t) = 98, and we wish to solve for t:

T(t) = 34 + 4 e^(−t ln 2) = 98 ⇒ 2^(−t) = 16 ⇒ t = −4 hours.

Thus T(−4) = 98, and Holmes was right: the time of death was 10 a.m.

27. We derive the solution to Newton's Law of Cooling as usual: T(t) = 75 + c e^(−kt). Since T(10) = 415, we have 75 + c e^(−10k) = 415 ⇒ 340 = c e^(−10k). Moreover, since T(20) = 347, we have 75 + c e^(−20k) = 347 ⇒ 272 = c e^(−20k). Solving these two equations yields

k = (1/10) ln(5/4) and c = 425.

Hence, T(t) = 75 + 425 (4/5)^(t/10).

(a) The furnace temperature is T(0) = 500°F.

(b) If T(t) = 100, then

100 = 75 + 425 (4/5)^(t/10) ⇒ t = 10 ln 17/ln(5/4) ≈ 126.96 minutes.

Thus the temperature of the coal was 100°F at 6:07 p.m.

28. Substituting the given values into the differential equation yields dT/dt = −k(T − 72). With separation of variables, this becomes dT/(T − 72) = −k dt. Solving this by the usual process, we get the general solution T(t) = 72 + c e^(−kt). Since dT/dt = −20 when T = 150, we have −k(150 − 72) = −20. Therefore, k = 10/39. Since T(1) = 150, we have 150 = 72 + c e^(−10/39) ⇒ c = 78 e^(10/39). Consequently,

T(t) = 72 + 78 e^(10(1−t)/39).

(i) The initial temperature of the object is T(0) = 72 + 78 e^(10/39) ≈ 173°F.

(ii) The rate of change of the temperature after 10 minutes is dT/dt when t = 10. Note that T(10) = 72 + 78 e^(−30/13), so after 10 minutes,

dT/dt = −k(T − 72) = −(10/39)(72 + 78 e^(−30/13) − 72) ⇒ dT/dt = −20 e^(−30/13) ≈ −2°F per minute.
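The arithmetic in Problem 27 is easy to cross-check numerically. The following sketch (Python here purely for convenience; the manual itself works by hand or in Maple) recomputes k and c from the two temperature readings and checks the furnace temperature and the time at which the coal reaches 100°F:

```python
import math

# Problem 27: T(t) = 75 + c*exp(-k*t), with T(10) = 415 and T(20) = 347.
# Dividing 340 = c*exp(-10k) by 272 = c*exp(-20k) gives exp(10k) = 340/272 = 5/4.
k = math.log(340 / 272) / 10      # (1/10) ln(5/4)
c = 340 * math.exp(10 * k)        # 340 * (5/4) = 425

def T(t):
    return 75 + c * math.exp(-k * t)

print(round(c))         # 425
print(round(T(0)))      # 500, the furnace temperature
t100 = 10 * math.log(17) / math.log(5 / 4)
print(round(t100, 1))   # about 127.0 minutes after 4 p.m. (the text rounds to 126.96)
```

The small discrepancy in the last digit against the text's 126.96 is just rounding; both correspond to roughly 6:07 p.m.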
Solutions to Section 1.5

True-False Review:

1. TRUE. The differential equation for such a population growth is dP/dt = kP, where P(t) is the population as a function of time, and this is the Malthusian growth model described at the beginning of this section.

2. FALSE. The initial population could be greater than the carrying capacity, although in this case the population will asymptotically decrease towards the value of the carrying capacity.

3. TRUE. The differential equation governing the logistic model is (1.5.2), which is certainly separable, as

(C/(P(C − P))) dP/dt = r.

Likewise, the differential equation governing the Malthusian growth model is dP/dt = kP, and this is separable as (1/P) dP/dt = k.

4. TRUE. As (1.5.3) shows, as t → ∞, the population does indeed tend to the carrying capacity C independently of the initial population P_0. As it does so, its rate of change dP/dt slows to zero (this is best seen from (1.5.2) with P ≈ C).

5. TRUE. Every five minutes, the population doubles (increases 2-fold). Over 30 minutes, this population will double a total of 6 times, for an overall 2^6 = 64-fold increase.

6. TRUE. An 8-fold increase would take 30 years, and a 16-fold increase would take 40 years. Therefore, a 10-fold increase would take between 30 and 40 years.

7. FALSE. The growth rate is dP/dt = kP, and so as P changes, dP/dt changes. Therefore, it is not always constant.

8. TRUE. From (1.5.2), the equilibrium solutions are P(t) = 0 and P(t) = C, where C is the carrying capacity of the population.

9. FALSE. If the initial population is in the interval (C/2, C), then although it is less than the carrying capacity, its concavity does not change. To get a true statement, it should be stated instead that the initial population is less than half of the carrying capacity.

10. TRUE. Since P'(t) = kP, then P''(t) = kP'(t) = k^2 P > 0 for all t. Therefore, the concavity is always positive, and does not change, regardless of the initial population.
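The folding arguments in review items 5 and 6 above can be checked numerically. A minimal sketch (Python, assuming pure exponential growth P(t) = P_0 · 2^(t/t_d) with doubling time t_d):

```python
import math

# Item 5: doubling every 5 minutes, so in 30 minutes the population
# doubles 30/5 = 6 times, a 2**6 = 64-fold increase.
fold = 2 ** (30 / 5)
print(fold)                 # 64.0

# Item 6: 8-fold in 30 years and 16-fold in 40 years force a doubling
# time of 10 years; a 10-fold increase then takes 10*log2(10) years.
t_tenfold = 10 * math.log2(10)
print(round(t_tenfold, 2))  # 33.22, indeed between 30 and 40
```

The 33.22-year figure is the same number that appears in Problem 4(b) below, since that problem also involves a 10-year doubling time.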
Problems:

1. We solve dP/dt = kP via separation of variables to obtain P(t) = P_0 e^(kt). Since P(0) = 10, then P(t) = 10 e^(kt). Since the doubling time is 3 hours, P(3) = 20, so that 2 = e^(3k) ⇒ k = (ln 2)/3. Thus P(t) = 10 e^((t/3) ln 2). Therefore, P(24) = 10 e^((24/3) ln 2) = 10 · 2^8 = 2560 bacteria.

2. We solve dP/dt = kP via separation of variables to obtain P(t) = P_0 e^(kt). Since we are given P(10) = 5000 and P(12) = 6000, then 5000 = P_0 e^(10k) and 6000 = P_0 e^(12k). Dividing the second equation by the first, we obtain e^(2k) = 6/5. Hence, k = (1/2) ln(6/5). Hence, the initial population is

P(0) = P_0 = 5000 e^(−10k) = 5000 e^(−5 ln(6/5)) = 5000 (5/6)^5 ≈ 2009.4.

Also, the population doubles at the time t such that P(t) = 2P_0. This occurs when t = (ln 2)/k = 2 ln 2/ln(6/5) ≈ 7.6 hours.

3. From P(t) = P_0 e^(kt) and P(0) = 2000, it follows that P(t) = 2000 e^(kt). Since the doubling time is 4 hours, it follows that k = (ln 2)/4, and so P(t) = 2000 e^(t ln 2/4). Therefore, the time t at which the culture contains 10^6 cells satisfies

10^6 = 2000 e^(t ln 2/4) ⇒ t ≈ 35.86 hours.

4. We solve dP/dt = kP via separation of variables to obtain P(t) = P_0 e^(kt), where we measure time in years. Since P(0) = 10000, then P(t) = 10000 e^(kt). Since P(5) = 20000, then 20000 = 10000 e^(5k) ⇒ k = (ln 2)/5. Hence P(t) = 10000 e^((t ln 2)/5).

(a) We have P(20) = 10000 e^(4 ln 2) = 160000.

(b) We set 1000000 = 10000 e^((t ln 2)/5). Then 100 = e^((t ln 2)/5) ⇒ t = 5 ln 100/ln 2 ≈ 33.22 years.

5. In formulas (1.5.5) and (1.5.6) we have P_0 = 500, P_1 = 800, P_2 = 1000, t_1 = 5, and t_2 = 10. Hence,

r = (1/5) ln[(1000)(300)/((500)(200))] = (1/5) ln 3

and

C = 800[(800)(1500) − 2(500)(1000)]/(800^2 − (500)(1000)) ≈ 1142.86,

so that

P(t) = (1142.86)(500)/(500 + 642.86 e^(−0.2t ln 3)) ≈ 571430/(500 + 642.86 e^(−0.2t ln 3)).

Inserting t = 15 into the preceding formula yields P(15) ≈ 1091.

6. In formulas (1.5.5) and (1.5.6) we have P_0 = 50, P_1 = 62, P_2 = 76, t_1 = 2, and t_2 = 4.
Hence,

r = (1/2) ln[(76)(12)/((50)(14))] ≈ 0.132

and

C = 62[(62)(126) − 2(50)(76)]/(62^2 − (50)(76)) ≈ 298.727,

so that

P(t) = 14936.35/(50 + 248.727 e^(−0.132t)).

Inserting t = 20 into the preceding formula yields P(20) ≈ 221.

7. (a) From (1.5.5), r > 0 requires P_2(P_1 − P_0)/(P_0(P_2 − P_1)) > 1. Rearranging the terms in this inequality and using the fact that P_2 > P_1 yields P_1 > 2P_0 P_2/(P_0 + P_2). Further, C > 0 requires that

[P_1(P_0 + P_2) − 2P_0 P_2]/(P_1^2 − P_0 P_2) > 0.

From P_1 > 2P_0 P_2/(P_0 + P_2) we see that the numerator in the preceding inequality is positive, and therefore the denominator must also be positive. Hence, in addition to P_1 > 2P_0 P_2/(P_0 + P_2), we must also have P_1^2 > P_0 P_2.

(b) We have P_0 = 10000, P_1 = 12000 and P_2 = 18000. We immediately see that P_1^2 > P_0 P_2 fails for these values, so by part (a), we conclude that there is no solution to the logistic equation that fits this data.

8. Let y(t) denote the number of passengers who have the flu at time t. Then we must solve

dy/dt = ky(1500 − y), y(0) = 5, y(1) = 10,

where k is a positive constant. Separating the differential equation and integrating yields

∫ dy/(y(1500 − y)) = ∫ k dt.

Using a partial fraction decomposition on the left-hand side gives

∫ [1/(1500y) + 1/(1500(1500 − y))] dy = ∫ k dt,

so that

(1/1500) ln|y/(1500 − y)| = kt + c,

which upon exponentiation yields

y/(1500 − y) = c_1 e^(1500kt).

Imposing the initial condition y(0) = 5, we find that c_1 = 1/299. Hence, y/(1500 − y) = (1/299) e^(1500kt). The further condition y(1) = 10 requires 10/1490 = (1/299) e^(1500k). Solving for k gives 1500k = ln(299/149). Therefore,

y/(1500 − y) = (1/299) e^(t ln(299/149)).

Solving algebraically for y we find

y(t) = 1500 e^(t ln(299/149))/(299 + e^(t ln(299/149))) = 1500/(1 + 299 e^(−t ln(299/149))).

Hence, y(14) = 1500/(1 + 299 e^(−14 ln(299/149))) ≈ 1474.

9. (a) Equilibrium solutions: P(t) = 0, P(t) = T. Slope: P > T ⇒ dP/dt > 0; 0 < P < T ⇒ dP/dt < 0.
Isoclines: rP(P − T) = k ⇒ P^2 − TP − k/r = 0 ⇒ P = (1/2)[T ± sqrt(T^2 + 4k/r)].

We see that the slope of the solution curves satisfies k ≥ −rT^2/4.

Concavity: We have d^2P/dt^2 = r(2P − T) dP/dt = r^2(2P − T)(P − T)P. Hence the solution curves are concave up for 0 < P < T/2 and for P > T, and are concave down for T/2 < P < T.

(b)

Figure 42: Figure for Problem 9(b)

(c) For 0 < P_0 < T, the population dies out with time. For P_0 > T, there is a population growth. The term threshold level is appropriate since T gives the minimum value of P_0 above which there is a population growth.

10. (a) Separating the variables in differential equation (1.5.7) gives

(1/(P(P − T))) dP/dt = r,

which can be written in the equivalent form

(1/T)[1/(P − T) − 1/P] dP/dt = r.

Integrating yields (1/T) ln|(P − T)/P| = rt + c, so that

(P − T)/P = c_1 e^(rTt).

The initial condition P(0) = P_0 requires (P_0 − T)/P_0 = c_1, so that

(P − T)/P = [(P_0 − T)/P_0] e^(rTt).

Solving algebraically for P yields

P(t) = T P_0/(P_0 − (P_0 − T) e^(rTt)).

(b) If P_0 < T, then the denominator in T P_0/(P_0 − (P_0 − T) e^(rTt)) is positive, and increases without bound as t → ∞. Consequently, lim_{t→∞} P(t) = 0. In this case the population dies out as t increases.

(c) If P_0 > T, then the denominator of T P_0/(P_0 − (P_0 − T) e^(rTt)) vanishes when (P_0 − T) e^(rTt) = P_0, i.e., when
We see that for $0 < P_0 < T$ the population dies out, whereas for $T < P_0 < C$ the population grows asymptotically to the equilibrium solution $P(t) = C$. If $P_0 > C$, then the solution decays towards the equilibrium solution $P(t) = C$.

Figure 43: Figure for Problem 11 (representative solution curves, with equilibria at $P = T$ and $P = C$).

12. Equilibrium solutions: $P(t) = C$. The slope of the solution curves is positive for $0 < P < C$, and negative for $P > C$. Concavity: We have
$$\frac{d^2P}{dt^2} = r\left(\ln\frac{C}{P} - 1\right)\frac{dP}{dt} = r^2\left(\ln\frac{C}{P} - 1\right)P\ln\frac{C}{P}.$$
Hence, the solution curves are concave up for $0 < P < \frac{C}{e}$ and $P > C$. They are concave down for $\frac{C}{e} < P < C$. A representative slope field with some solution curves is shown in the accompanying figure.

Figure 44: Figure for Problem 12.

13. Separating the variables in (1.5.8) yields $\dfrac{dP}{P(\ln C - \ln P)} = r\,dt$, which can be integrated directly to obtain $-\ln(\ln C - \ln P) = rt + c$, where $c$ is a constant of integration. Therefore, $\ln\left(\frac{C}{P}\right) = c_1 e^{-rt}$. The initial condition $P(0) = P_0$ requires that $\ln\left(\frac{C}{P_0}\right) = c_1$. Hence, $\ln\left(\frac{C}{P}\right) = e^{-rt}\ln\left(\frac{C}{P_0}\right)$. Exponentiation of each side gives
$$\frac{C}{P} = \left(\frac{C}{P_0}\right)^{e^{-rt}} \implies P(t) = C\left(\frac{P_0}{C}\right)^{e^{-rt}}.$$
Since $\lim_{t\to\infty} e^{-rt} = 0$, it follows that $\lim_{t\to\infty} P(t) = C$.

14. We solve $\frac{dP}{dt} = kP$ via separation of variables to obtain $P(t) = P_0 e^{kt}$. The initial condition $P(0) = 400$ requires that $P_0 = 400$, so that $P(t) = 400e^{kt}$. We also know that $P(30) = 340$. This requires that $340 = 400e^{30k}$, so that $k = \frac{1}{30}\ln\frac{17}{20}$. Consequently,
$$P(t) = 400e^{\frac{t}{30}\ln\left(\frac{17}{20}\right)}.$$
(a) We have $P(60) = 400e^{2\ln\left(\frac{17}{20}\right)} \approx 289$. Also, we have $P(100) = 400e^{\frac{10}{3}\ln\left(\frac{17}{20}\right)} \approx 233$.

(b) The half-life, $t_H$, is determined from
$$200 = 400e^{\frac{t_H}{30}\ln\left(\frac{17}{20}\right)} \implies t_H = \frac{30\ln 2}{\ln\frac{20}{17}} \approx 128 \text{ days}.$$

15. (a) More. From the given information, 20000 fans left the stadium in the first ten minutes. Because the rate of decay lessens with time under the exponential decay model, fewer than 20000 fans will leave in each of the next two ten-minute intervals.
Hence, fewer than 60000 fans will have departed after a total of 30 minutes. That is, more than 40000 fans will remain in the stadium after 30 minutes.

(b) We solve $\frac{dP}{dt} = kP$ via separation of variables to obtain $P(t) = P_0 e^{kt}$. The initial condition $P(0) = 100{,}000$ requires that $P_0 = 100{,}000$, so that $P(t) = 100{,}000e^{kt}$. We also know that $P(10) = 80{,}000$. This requires that $80{,}000 = 100{,}000e^{10k}$, so that $k = \frac{1}{10}\ln\frac{4}{5}$. Consequently,
$$P(t) = 100{,}000e^{\frac{t}{10}\ln\left(\frac{4}{5}\right)}.$$
The half-life is determined from
$$50{,}000 = 100{,}000e^{\frac{t_H}{10}\ln\left(\frac{4}{5}\right)} \implies t_H = \frac{10\ln 2}{\ln\frac{5}{4}} \approx 31.06 \text{ min}.$$
(c) There will be 15,000 fans left in the stadium at time $t_0$, where
$$15{,}000 = 100{,}000e^{\frac{t_0}{10}\ln\left(\frac{4}{5}\right)} \implies t_0 = \frac{10\ln\frac{3}{20}}{\ln\frac{4}{5}} \approx 85.02 \text{ min}.$$

16. We solve $\frac{dP}{dt} = kP$ via separation of variables to obtain $P(t) = P_0 e^{kt}$. Since the half-life is 5.2 years,
$$\frac{1}{2}P_0 = P_0 e^{5.2k} \implies k = -\frac{\ln 2}{5.2}.$$
Therefore, $P(t) = P_0 e^{-t\frac{\ln 2}{5.2}}$. Consequently, only 4% of the original amount will remain at time $t_0$, where
$$\frac{4}{100}P_0 = P_0 e^{-t_0\frac{\ln 2}{5.2}} \implies t_0 = 5.2\,\frac{\ln 25}{\ln 2} \approx 24.15 \text{ years}.$$

17. Maple, or even a TI-92 Plus, has no problem in solving these equations.

18. (a) The Malthusian model of population predicts that $P(t) = 151.3e^{kt}$. Since $P(10) = 179.4$, then $179.4 = 151.3e^{10k} \implies k = \frac{1}{10}\ln\frac{179.4}{151.3}$. Hence, $P(t) = 151.3e^{\frac{t}{10}\ln(179.4/151.3)}$.

(b) According to the logistic model, $P(t) = \dfrac{151.3C}{151.3 + (C - 151.3)e^{-rt}}$. Imposing the conditions $P(10) = 179.4$ and $P(20) = 203.3$ gives the pair of equations
$$179.4 = \frac{151.3C}{151.3 + (C - 151.3)e^{-10r}} \quad\text{and}\quad 203.3 = \frac{151.3C}{151.3 + (C - 151.3)e^{-20r}},$$
whose solution is $C \approx 263.95$ and $r \approx 0.046$. Using these values for $C$ and $r$ gives
$$P(t) = \frac{39935.6}{151.3 + 112.65e^{-0.046t}}.$$
(c) Figure 45: Figure for Problem 18(c).

Malthusian model: $P(30) \approx 253$ million; $P(40) \approx 300$ million.
Logistic model: $P(30) \approx 222$ million; $P(40) \approx 236$ million.
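The numerical predictions quoted for Problem 18 can be reproduced directly from the two fitted models. This is a sketch using the formulas above (the rounded Malthusian values can differ by about a unit from those quoted, depending on the precision carried for the growth rate):

```python
import math

P0, P1, P2 = 151.3, 179.4, 203.3  # census data at t = 0, 10, 20

# Malthusian model: P(t) = P0 * e^(k t), with k fixed by P(10) = P1.
k = math.log(P1 / P0) / 10
malthus = lambda t: P0 * math.exp(k * t)

# Logistic model with the fitted values C ~ 263.95, r ~ 0.046 from above.
C, r = 263.95, 0.046
logistic = lambda t: P0 * C / (P0 + (C - P0) * math.exp(-r * t))

for t in (30, 40):
    print(t, round(malthus(t)), round(logistic(t)))
```

The logistic predictions of roughly 222 and 236 million at $t = 30$ and $t = 40$ agree with the values stated above.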
The logistic model fits the data better than the Malthusian model, but still gives a significant underestimate of the 1990 population.

19. The logistic population model predicts that $P(t) = \dfrac{50C}{50 + (C - 50)e^{-rt}}$, where $C$ and $r$ are to be determined. Imposing the conditions $P(5) = 100$ and $P(15) = 250$ gives the pair of equations
$$100 = \frac{50C}{50 + (C - 50)e^{-5r}} \quad\text{and}\quad 250 = \frac{50C}{50 + (C - 50)e^{-15r}},$$
whose positive solutions are $C \approx 370.32$ and $r \approx 0.17$. Using these values for $C$ and $r$ gives
$$P(t) = \frac{18500}{50 + 18450e^{-0.17t}}.$$
From the figure we see that it will take approximately 52 years to reach 95% of the carrying capacity.

Figure 46: Figure for Problem 19 (the carrying capacity appears as a horizontal asymptote).

Solutions to Section 1.6

True-False Review:

1. FALSE. Any solution to the differential equation (1.6.7) serves as an integrating factor for the differential equation. There are infinitely many solutions to (1.6.7), taking the form $I(x) = c_1 e^{\int p(x)dx}$, where $c_1$ is an arbitrary constant.

2. TRUE. Any solution to the differential equation (1.6.7) serves as an integrating factor for the differential equation. There are infinitely many solutions to (1.6.7), taking the form $I(x) = c_1 e^{\int p(x)dx}$, where $c_1$ is an arbitrary constant. The most natural choice is $c_1 = 1$, giving the integrating factor $I(x) = e^{\int p(x)dx}$.

3. TRUE. Multiplying $y' + p(x)y = q(x)$ by $I(x)$ yields $y'I + pIy = qI$. Assuming that $I' = pI$, the requirement on the integrating factor, we have $y'I + I'y = qI$, or by the product rule, $(I \cdot y)' = qI$, as requested.

4. FALSE. Before determining an integrating factor, the equation must be rewritten as
$$\frac{dy}{dx} - x^2 y = \sin x,$$
and with $p(x) = -x^2$, we have integrating factor $I(x) = e^{\int -x^2\,dx}$, not $e^{\int x^2\,dx}$.

5. TRUE. Rewriting the differential equation as
$$\frac{dy}{dx} + \frac{1}{x}y = x,$$
we have $p(x) = \frac{1}{x}$, and so an integrating factor must have the form $I(x) = c_1 e^{\int \frac{dx}{x}} = c_1 x$. Since $5x$ does indeed have this form, it is an integrating factor.
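Statement 3 of the review — that multiplying by $I(x) = e^{\int p(x)dx}$ makes the left-hand side an exact derivative — can be sanity-checked on a concrete equation. The sketch below compares the closed-form solution of $y' - y = e^{2x}$, namely $y = e^x(e^x + c)$ (Problem 1 below), against a direct Euler integration; the initial value and step size here are arbitrary choices:

```python
import math

# Euler integration of y' = y + e^(2x) on [0, x_end], starting from y(0) = y0.
def euler(y0, x_end, h=1e-5):
    x, y = 0.0, y0
    while x < x_end - h / 2:
        y += h * (y + math.exp(2 * x))
        x += h
    return y

y0 = 2.0
c = y0 - 1.0                                  # from y(0) = 1 + c in the closed form
exact = lambda x: math.exp(x) * (math.exp(x) + c)
print(abs(euler(y0, 1.0) - exact(1.0)))       # small discrepancy, shrinking with h
```

The discrepancy is of order $h$, as expected for Euler's method, which corroborates the closed form produced by the integrating factor.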
Problems: In this section the function I (x) = e of the form y + p(x)y = q (x). 1. An integrating factor is I (x) = e− p(x)dx dx p(x)dx =e dx x = eln x+c = ec x. will represent the integrating factor for a diﬀerential equation d(e−x y ) = ex =⇒ e−x y = ex + c =⇒ y (x) = dx = e−x . Therefore, ex (ex + c). 4 2. We rewrite the diﬀerential equation in standard form: y − x y = x5 sin x. An integrating factor is −4 d(x y ) I (x) = x−4 . Therefore, = x sin x =⇒ x−4 y = sin x − x cos x + c =⇒ y (x) = x4 (sin x − x cos x + c). dx 3. An integrating factor is I (x) = e2 2 xdx 2 2 d x2 (e y ) = 2ex x3 =⇒ ex y = 2 dx = ex . Therefore, 2 2 2 2 ex x3 dx =⇒ ex y = ex (x2 − 1) + c =⇒ y (x) = x2 − 1 + ce−x . 4. An integrating factor for the diﬀerential equation is I (x) = e 2x 1−x2 dx 2 = e− ln(1−x ) = 1 . 1 − x2 Therefore, d dx y 1 − x2 = 4x y =⇒ = − ln(1 − x2 )2 + c =⇒ y (x) = (1 − x2 )[− ln (1 − x2 )2 + c]. 1 − x2 1 − x2 5. An integrating factor for the diﬀerential equation is I (x) = e d 4 [(1+x2 )y ] = =⇒ (1+x2 )y = 4 dx 1 + x2 2x 1+x2 dx = 1 + x2 . Therefore, dx 1 =⇒ (1+x2 )y = 4 tan−1 x+c =⇒ y (x) = (4 tan−1 x+c). 1 + x2 1 + x2 sin 2x 6. We rewrite the diﬀerential equation in standard form: y + y = 2 cos2 x. An integrating factor for 2 cos2 x the diﬀerential equation is I (x) = e sin 2x dx 2 cos2 x =e sin x cos x dx cos2 x =e tan xdx = e− ln(cos x) = 1 . cos x 50 Therefore, d (y sec x) = 2 cos x =⇒ y (x) = cos x(2 sin x + c) =⇒ y (x) = sin 2x + c cos x. dx dx x ln x 7. An integrating factor is I (x) = e d (y ln x) = 9 dx = eln(ln x) = ln x. Therefore, x2 ln xdx =⇒ y ln x = 3x3 ln x − x3 + c =⇒ y (x) = 8. An integrating factor is I (x) = e − tan x dx d (y cos x) = 8 cos x sin3 x =⇒ y cos x = 8 dx 3x3 ln x − x3 + c . ln x = eln(cos x) = cos x. Therefore, cos x sin3 xdx =⇒ y cos x = 2 sin4 x + c =⇒ y (x) = 2 sin4 x + c . cos x 2 4et 9. We rewrite the diﬀerential equation in standard form: x + x = . An integrating factor is I (t) = t t dt e2 t = t2 . 
Therefore, d2 (t x) = 4tet =⇒ t2 x = 4 dt tet dt =⇒ t2 x = 4et (t − 1) + c =⇒ x(t) = 4et (t − 1) + c . t2 10. We rewrite the diﬀerential equation in standard form: y − (tan x)y = −2 sin x. An integrating factor is I (x) = e − tan xdx = eln(cos x) = cos x. Therefore, d 1 (y cos x) = −2 sin x cos x = − sin 2x =⇒ y cos x = cos 2x + c =⇒ y (x) = sec x dx 2 1 cos 2x + c . 2 11. We rewrite the diﬀerential equation in standard form: y + (tan x)y = sec x. An integrating factor is I (x) = e tan xdx = e− ln(cos x) = sec x. Therefore, d (y sec x) = sec2 x =⇒ y sec x = dx sec2 xdx = tan x+c =⇒ y (x) = cos x(tan x+c) =⇒ y (x) = sin x+c cos x. 12. An integrating factor is I (x) = e− 1 x dx = x−1 . Therefore, d −1 (x y ) = 2x ln x =⇒ x−1 y = 2 dx Hence, y (x) = x ln xdx = 12 x (2 ln x − 1) + c. 2 13 x (2 ln x − 1) + cx. 2 13. An integrating factor is I (x) = eα dx = eαx . Therefore, d αx (e y ) = e(α+β )x =⇒ eαx y = dx e(α+β )x dx. 51 Now we consider two cases: Case 1: α + β = 0. In this case, eαx y = x + c, so that y (x) = e−αx (x + c). Case 2: α + β = 0. In this case, eαx y = 14. An integrating factor is I (x) = e e(α+β )x eβx + c, so that y (x) = + ce−αx . α+β α+β mx−1 dx = em ln x = xm . Therefore, dm (x y ) = xm ln x =⇒ xm y = dx xm ln xdx. Now we consider two cases: Case 1: m = −1. In this case, xm y = (ln x)2 (ln x)2 + c =⇒ y (x) = x +c . 2 2 Case 2: m = −1. In this case, xm y = xm+1 x x c xm+1 ln x − + c =⇒ y (x) = ln x − + m. 2 2 m+1 (m + 1) m+1 (m + 1) x 15. An integrating factor is I (x) = e2 dx x = e2 ln x = x2 . Therefore, d2 (x y ) = 4x3 =⇒ x2 y = 4 dx and since y (1) = 2, we have c = 1. Therefore, y (x) = x3 dx + c =⇒ x2 y = x4 + c, x4 + 1 . x2 16. We begin by rewriting the diﬀerential equation y sin x − y cos x = sin 2x as y − y cot x = 2 cos x. Now, an integrating factor for this diﬀerential equation is I (x) = e − cot x dx = e− ln(sin x) = csc x. Therefore, d (y csc x) = 2 csc x cos x =⇒ y csc x = 2 ln (sin x) + c, dx and since y ( π ) = 2, we have c = 2. 
Therefore, y (x) = 2 sin x[ln (sin x) + 1]. 2 17. An integrating factor is I (t) = e2 dt 4−t = e−2 ln(4−t) = (4 − t)t−2 . Therefore, d ((4 − t)−2 x) = 5(4 − t)−2 =⇒ (4 − t)−2 x = 5 dt (4 − t)−2 dt + c =⇒ (4 − t)−2 x = 5(4 − t)−1 + c, and since x(0) = 4, we have c = −1. Therefore, x(t) = (4 − t)2 [5(4 − t)−1 − 1] = (4 − t)(1 + t). 18. First we rewrite the diﬀerential equation (y − e−x )dx + dy = 0 as y + y = ex . An integrating factor is I (x) = ex , so that dx e2x (e y ) = e2x =⇒ ex y = + c. dx 2 1 1 Since y (0) = 1, we have c = . Therefore, y (x) = (ex + e−x ) = cosh x. 2 2 52 dx 19. An integrating factor for this diﬀerential equation is I (x) = e = ex . Therefore, x dx (e y ) = ex f (x) =⇒ [ex y ]x = 0 dx ex f (x)dx 0 x x ex f (x)dx =⇒ e y − y (0) = 0 x =⇒ ex y − 3 = ex dx 0 x =⇒ y (x) = e−x 3 + ex f (x)dx . 0 x x ex dx = ex − 1 =⇒ y (x) = e−x (2 + ex ) ex f (x)dx = If x ≤ 1, 0 0 x x ex dx = e − 1 =⇒ y (x) = e−x (2 + e). ex f (x)dx = If x > 1, 0 0 20. An integrating factor for this diﬀerential equation is I (x) = e− 2dx = e−2x . Therefore, x d −2x (e y ) = e−2x f (x) =⇒ [e−2x y ]x = 0 dx e−2x f (x)dx 0 x =⇒ e−2x y − y (0) = e−2x f (x)dx 0 x =⇒ e−2x y − 1 = e−2x f (x) 0 x 2x =⇒ y (x) = e e−2x f (x)dx . 1+ 0 If x < 1, x x e−2x f (x)dx = 0 so that e−2x (1 − x)dx = 0 1 −2x e (2x − 1 + e2x ), 4 1 1 y (x) = e2x 1 + e−2x (2x − 1 + e2x ) = (5e2x + 2x − 1). 4 4 If x ≥ 1, x x e−2x f (x)dx = 0 e−2x (1 − x)dx = 0 1 1 1 (1 + e−2 ) =⇒ y (x) = e2x 1 + (1 + e−2 ) = e2x (5 + e−2 ). 4 4 4 21. On (−∞, 1), the diﬀerential equation is y − y = 1, which implies that an integrating factor is I (x) = e−x . Therefore, y (x) = c1 ex − 1. Imposing the initial condition y (0) = 0 requires c1 = 1, so that y (x) = ex − 1, for x < 1. On [1, ∞), the diﬀerential equation is y − y = 2 − x, which implies that an integrating factor is I (x) = e−x . Therefore, d −x (e y ) = (2 − x)e−x =⇒ y (x) = x − 1 + c2 e−x . dx 53 Continuity at x = 1 requires that lim y (x) = y (1). 
Consequently, we must choose $c_2$ to satisfy $c_2 e = e - 1$, so that $c_2 = 1 - e^{-1}$. Hence, for $x \geq 1$, we have $y(x) = x - 1 + (1 - e^{-1})e^x$.

22. Let $u = \dfrac{dy}{dx}$, so that $\dfrac{du}{dx} = \dfrac{d^2y}{dx^2}$. The first equation becomes $\dfrac{du}{dx} + \dfrac{1}{x}u = 9x$, which is first-order linear. An integrating factor for this is $I(x) = e^{\int \frac{dx}{x}} = x$, so
$$\frac{d}{dx}(xu) = 9x^2 \implies xu = 3x^3 + c_1 \implies u = 3x^2 + c_1 x^{-1},$$
but since $u = \dfrac{dy}{dx}$, we conclude that
$$\frac{dy}{dx} = 3x^2 + c_1 x^{-1} \implies y = \int (3x^2 + c_1 x^{-1})\,dx \implies y(x) = x^3 + c_1 \ln x + c_2.$$

23. The differential equation for Newton's Law of Cooling is $\dfrac{dT}{dt} = -k(T - T_m)$. We can rewrite this equation in the form of a first-order linear differential equation: $\dfrac{dT}{dt} + kT = kT_m$. An integrating factor for this differential equation is $I = e^{\int k\,dt} = e^{kt}$. Thus, $\dfrac{d}{dt}(Te^{kt}) = kT_m e^{kt}$. Integrating both sides, we get $Te^{kt} = T_m e^{kt} + c$, and hence, $T = T_m + ce^{-kt}$, which is the solution to Newton's Law of Cooling.

24. We are given that $\dfrac{dT_m}{dt} = \alpha$. Therefore, $T_m = \alpha t + c_1$, so that Newton's Law of Cooling in this case reads
$$\frac{dT}{dt} = -k(T - \alpha t - c_1) \implies \frac{dT}{dt} + kT = k(\alpha t + c_1).$$
An integrating factor for this differential equation is $I(t) = e^{\int k\,dt} = e^{kt}$. Thus,
$$\frac{d}{dt}(e^{kt}T) = ke^{kt}(\alpha t + c_1) \implies e^{kt}T = e^{kt}\left(\alpha t - \frac{\alpha}{k} + c_1\right) + c_2 \implies T(t) = \alpha t - \frac{\alpha}{k} + c_1 + c_2 e^{-kt}.$$

25. We are given that $\dfrac{dT_m}{dt} = 10$. Therefore, $T_m = 10t + c_1$, for some constant $c_1$. Since $T_m = 65$ when $t = 0$ (at 8 a.m.), we find that $c_1 = 65$. Therefore, $T_m = 10t + 65$, and Newton's Law of Cooling in this case reads
$$\frac{dT}{dt} = -k(T - T_m) \implies \frac{dT}{dt} = -k(T - 10t - 65).$$
Since $\dfrac{dT}{dt}(1) = 5$, we have $5 = -k(35 - 10 - 65)$, so that we obtain $k = \dfrac{1}{8}$. Now, the last differential equation can then be written
$$\frac{dT}{dt} + kT = k(10t + 65) \implies \frac{d}{dt}(e^{kt}T) = 5ke^{kt}(2t + 13).$$
Therefore,
$$e^{kt}T = 5e^{kt}\left(2t - \frac{2}{k} + 13\right) + c \implies T(t) = 5(2t - 3) + ce^{-t/8}.$$
Since $T(1) = 35$, we have $c = 40e^{1/8}$. Thus, $T(t) = 10t - 15 + 40e^{\frac{1}{8}(1-t)}$.

26.
(a) In this case, Newton’s law of cooling is = − (T − 80e−t/20 ). This linear diﬀerential equation dt 40 dT 1 has standard form + T = 2e−t/20 , with integrating factor I (t) = et/40 . Consequently the diﬀerential dt 40 d t/40 equation can be written in the integrable form (e T ) = 2e−t/40 , so that T (t) = −80e−t/20 + ce−t/40 . dt Then T (0) = 0 =⇒ c = 80, so that T (t) = 80(e−t/40 − e−t/20 ). (b) We see that lim T (t) = 0. This is a reasonable result since the temperature of the surrounding t→∞ medium also approaches zero as t → ∞. We would expect the temperature of the object to approach to the temperature of the surrounding medium at late times. (c) The maximum temperature must occur at a critical point of the function T (t) derived in part (a). To determine critical points, we set 0= 1 1 dT = 80 − e−t/40 + e−t/20 . dt 40 20 We ﬁnd only one critical value of t, namely t = 40 ln 2. Since T (0) = 0 and lim T (t) = 0, the function t→∞ assumes a maximum value at tmax = 40 ln 2. At this time, the temperature of the object is T (tmax ) = 80(e− ln 2 − e−2 ln 2 ) = 20◦ F, and the temperature of the surrounding medium is Tm (tmax ) = 80e−2 ln 2 = 20◦ F. (d) The behavior of T (t) and Tm (t) is given in the accompanying ﬁgure. T(t), Tm(t) 80 Tm(t) 60 40 T(t) 20 t 0 0 20 40 60 80 100 120 Figure 47: Figure for Problem 26(d) 27. (a) The temperature varies from a minimum of A − B at t = 0 to a maximum of A + B when t = 12. 55 T A+B A-B t 5 10 15 20 Figure 48: Figure for Problem 27(a) dT + k1 T = k1 (A − B cos ωt) + T0 . Multiplying dt reduces this diﬀerential equation to the integrable form (b) First write the diﬀerential equation in the linear form by the integrating factor I (t) = ek1 t d k1 t (e T ) = k1 ek1 t (A − B cos ωt) + T0 ek1 t . dt Consequently, ek1 t T (t) = Aek1 t − Bk1 so that T (t) = A + ek1 t cos ωtdt + T0 k1 t e + c, k1 Bk1 T0 −2 (k1 cos ωt + ω sin ωt) + ce−k1 t . 
k1 k1 + ω 2 This can be written in the equivalent form T (t) = A + T0 − k1 Bk1 2 k1 + ω 2 cos (ωt − α) + ce−k1 t for an appropriate phase constant α. 28. (a) We solve dy + p(x)y = 0 via separation of variables: dx dy = −p(x)dx =⇒ y dy =− y p(x)dx =⇒ ln |y | = − p(x)dx =⇒ yH = c1 e− p(x)dx . (b) Replace c1 in part (a) by u(x) and let v = e− p(x)dx . In order for y = uv to be a solution to dy dv du (1.6.15), we must have =u + v . Substituting this last result into the original diﬀerential equation, dx dx dx dy dv du dv + p(x)y = q (x), we obtain u +v + p(x)y = q (x). But since = −vp, the last equation reduces dx dx dx dx du du to v = q (x). That is, = v −1 (x)q (x). Integrating, we obtain u(x) = v −1 (x)q (x)dx. Substituting the dx dx values for u and v into y = uv , we obtain the general solution y = e− q (x) p(x)dx − e p(x)dx dx . 56 dy + x−1 y = 0, with solution yH = cx−1 (see Problem 28(a)). dx According to Problem 28 we determine the function u(x) such that y (x) = x−1 u(x) is a solution to the dy du dy given diﬀerential equation. We have = x−1 − x−2 u. Substituting into + x−1 y = cos x yields dx dx dx du du 1 x−1 − u + x−1 (x−1 u) = cos x, so that = x cos x. Integrating we obtain u = x sin x + cos x + c, so dx x2 dx −1 that y (x) = x (x sin x + cos x + c). 29. The associated homogeneous equation is dy + y = 0, with solution yH = ce−x (see Problem 28(a)). dx According to Problem 28 we determine the function u(x) such that y (x) = e−x u(x) is a solution to the given dy du −x dy du −x diﬀerential equation. We have = e − e−x u. Substituting into + y = e−2x yields e − e−x u + dx dx dx dx du = e−x . Integrating we obtain u = −e−x + c, so that y (x) = e−x (−e−x + c). e−x u(x) = e−2x , so that dx 30. The associated homogeneous equation is dy +cot x·y = 0, with solution yH = c·csc x (see Problem 28(a)). dx According to Problem 28 we determine the function u(x) such that y (x) = csc x · u(x) is a solution to the dy du dy given diﬀerential equation. 
We have = csc x · − csc x · cot x · u. Substituting into + cot x · y = 2 cos x dx dx dx du du yields csc x · − csc x · cot x · u + csc x · cot x · u = cos x, so that = 2 cos x sin x. Integrating we obtain dx dx 2 2 u = sin x + c, so that y (x) = csc x(sin x + c). 31. The associated homogeneous equation is 1 dy − y = 0, with solution yH = cx (see Problem 28(a)). We dx x determine the function u(x) such that y (x) = xu(x) is a solution of the given diﬀerential equation. We have dy du dy 1 du = x + u. Substituting into − y = x ln x and simplifying yields = ln x, so that u = x ln x − x + c. dx dx dx x dx Consequently, y (x) = x(x ln x − x + c). 32. The associated homogeneous equation is Problems 33-38 are easily solved using a diﬀerential equation solver such as the dsolve package in Maple. Solutions to Section 1.7 True-False Review: 1. TRUE. Concentration of chemical is deﬁned as the ratio of mass to volume; that is, c(t) = Therefore, A(t) = c(t)V (t). A(t) V (t) . 2. FALSE. The rate of change of volume is “rate in” − “rate out”, which is r1 − r2 , not r2 − r1 . 3. TRUE. This is reﬂected in the fact that c1 is always assumed to be a constant. 4. FALSE. The concentration of chemical leaving the tank is c2 (t) = can be nonconstant, c2 (t) can also be nonconstant. A(t) V (t) , and since both A(t) and V (t) 5. FALSE. Kirchhoﬀ’s second law states that the sum of the voltage drops around a closed circuit is zero, not that it is independent of time. 6. TRUE. This is essentially Ohm’s law, (1.7.10). 57 7. TRUE. Due to the negative exponential in the formula for the transient current, iT (t), it decays to zero as t → ∞. Meanwhile, the steady-state current, iS (t), oscillates with the same frequency ω as the alternating current, albeit with a phase shift. 8. TRUE. The amplitude is given in (1.7.19) as A = gets smaller. √ E0 , R2 +ω 2 L2 and so as ω gets larger, the amplitude A Problems: 1. We have ∆V = r1 ∆t − r2 ∆t = ∆t. 
Therefore, dV = 1 =⇒ V (t) = t + 10, dt where we have used the initial condition V (0) = 10. Now, ∆A ≈ c1 r1 ∆t − c2 r2 ∆t, so dA A A dA 1 = 8 − c2 = 8 − =8− =⇒ + A = 8. dt V t + 10 dt t + 10 1 This diﬀerential equation is ﬁrst-order linear, with integrating factor I (t) = e t+10 dt t + 10. Therefore, (t + 10)A = 4(t + 10)2 + c1 . Since A(0) = 20, we have c1 = −200. We conclude that A(t) = 4 [(t + 10)2 − 50]. t + 10 Therefore, after 40 minutes, we have A(40) = 196 g. 2. We need to ﬁnd A(60) , where t is measured in minutes. We have ∆V = r1 ∆t − r2 ∆t = 3∆t. Therefore, V (60) dV = 3 =⇒ V (t) = 3(t + 200), dt where we have used V (0) = 600. Now, ∆A ≈ c1 r1 ∆t − c2 r2 ∆t, so A A dA = 30 − 3c2 = 30 − 3 = 30 − . dt V t + 200 1 This diﬀerential equation is ﬁrst-order linear, with integrating factor I (t) = e t+200 dt = t + 200. Hence, (t + 200)A = 15(t + 200)2 + c. Since A(0) = 1500, we have c = −300000. We conclude that A(t) = Thus 15 [(t + 200)2 − 20000]. t + 200 A(60) 596 = g/L. V (60) 169 3. We have ∆V = r1 ∆t − r2 ∆t = 2∆t. Therefore, dV = 2 =⇒ V (t) = 2(t + 10), dt 58 where we have used V (0) = 20. Thus V (t) = 40 for t = 10, so we must ﬁnd A(10). Now, ∆A ≈ c1 r1 ∆t − c2 r2 ∆t, so 2A A dA 1 dA = 40 − 2c2 = 40 − = 40 − =⇒ + A = 40 dt V t + 10 dt t + 10 d =⇒ [(t + 10)A] = 40(t + 10)dt dt =⇒ (t + 10)A = 20(t + 10)2 + c. Since A(0) = 0, we have c = −2000. Thus, we conclude that A(t) = 20 [(t + 10)2 − 100] t + 10 and A(10) = 300 g. 4. We have ∆V = r1 ∆t − r2 ∆t = 2∆t. Therefore, dV = 2 =⇒ V (t) = 2(t + 50), dt where we have used V (0) = 100. Now, ∆A ≈ c1 r1 ∆t − c2 r2 ∆t, so dA 4A 2A = 3 − 4c2 = 3 − =3− . dt V t + 50 Hence, dA 4A dA 2A + = 3 =⇒ + =3 dt 2(t + 50) dt t + 50 d =⇒ [(t + 50)2 A] = 3(t + 50)2 dt =⇒ (t + 50)2 A = (t + 50)3 + c. Since A(0) = 100, we have c = 125000, and therefore, A(t) = t + 50 + 125000 . (t + 50)2 The tank is full when V (t) = 200, that is, when 2(t + 50) = 200 so that t = 50 min. 
Therefore the A(50) 9 concentration just before the tank overﬂows is: = g/L. V (50) 16 5. (a) We have ∆V = r1 ∆t − r2 ∆t. Therefore, dV = 1 =⇒ V (t) = t + 10, dt where we have used V (0) = 10. Now, ∆A ≈ c1 r1 ∆t − c2 r2 ∆t, so that dA A 2A = −2c2 = −2 = − . dt V t + 10 We solve this by separation of variables: dA 2dt =− =⇒ ln |A| = −2 ln |t + 10| + c =⇒ A(t) = k (t + 10)−2 , A t + 10 59 where k is a constant. Since V (5) = 15, we have A(5) = 3 since A(t) = A(5) = 0.2. Thus, k = 675 and V (5) 675 . In particular, A(0) = 6.75 g. (t + 10)2 (b) From part (a), A(t) = 675 and V (t) = t + 10. Therefore, (t + 10)2 A(t) 675 . = V (t) (t + 10)3 Setting √ √ A(t) = 0.1, we have (t + 10)3 = 6750 =⇒ t + 10 = 15 3 2 so V (t) = t + 10 = 15 3 2 L. V (t) 6. (a) We have ∆V = r1 ∆t − r2 ∆t = ∆t. Therefore, dV = 1 =⇒ V = t + 20, dt where we have used V (0) = 20. Now, ∆A ≈ c1 r1 ∆t − c2 r2 ∆t, so that dA 2A 2A = 3 − 2c2 = 3 − =3− . dt V t + 20 Hence, 2 d (20 + t)3 + c dA + A = 3 =⇒ [(t + 20)2 A] = 3(t + 20)2 =⇒ A(t) = . dt t + 20 dt (t + 20)2 Since A(0) = 0, we have c = −203 which means that A(t) = (t + 20)3 − 203 . (t + 20)2 A(t) A(t) or c2 = so from part (a), V (t) t + 20 √ (t + 20)3 − 203 1 1 (t + 20)3 − 203 c2 = . Therefore c2 = g/L when = , and we conclude that t = 20( 3 2 − 1) (t + 20)3 2 2 (t + 20)3 minutes. (b) The concentration of chemical in the tank, c2 , is given by c2 = 7. (a) We have ∆V = r1 ∆r − r2 ∆t = 0, since r1 = r2 = r. Therefore, dV = 0 =⇒ V (t) = V (0) = w dt for all t. Now, ∆A = c1 r1 ∆t − c2 r2 ∆t, so that dA A r = kr − r = kr − A, dt V w so we have the ﬁrst-order linear diﬀerential equation dA r d + A = kr =⇒ (ert/w A) = krert/w =⇒ A(t) = kw + ce−rt/w . dt w dt 60 Since A(0) = A0 , we have c = A0 − kw. Therefore, A(t) = e−rt/w [kw(ert/w − 1) + A0 ]. (b) We have e−rt/w A(t) k w(ert/w − 1) + A0 = lim k + = lim t→∞ t→∞ t→∞ V (t) w lim A0 − k e−rt/w = k. 
w This is reasonable since the volume remains constant, and the solution in the tank is gradually mixed with and replaced by the solution of concentration k ﬂowing in. 8. (a) For the top tank we have: dA1 A1 dA1 r2 dA1 = c1 r1 − c2 r2 =⇒ = c1 r1 − r2 =⇒ = c1 r1 − A1 (t). dt dt V1 dt (r1 − r2 )t + V1 Rearranging this equation, we obtain dA1 r2 + A1 = c1 r1 . dt (r1 − r2 )t + V1 For the bottom tank we have: dA2 A1 A2 (t) dA2 = c2 r2 − c3 r3 =⇒ = r2 − r3 dt dt (r1 − r2 )t + V1 V2 (t) dA2 A1 A2 (t) =⇒ = r2 − r3 dt (r1 − r2 )t + V1 (r2 − r3 )t + V2 r3 dA2 r2 A1 + =⇒ A2 = . dt (r1 − r2 )t + V2 (r1 − r2 )t + V1 (b) From part (a), r2 4 dA1 dA1 + + A1 = 3 A1 = c1 r1 =⇒ dt (r1 − r2 )t + V1 dt 2t + 40 dA1 2 =⇒ + A1 = 3 dt t + 20 d =⇒ [(t + 20)2 A1 ] = 3(t + 20)2 dt c =⇒ A1 = t + 20 + . (t + 20)2 Since A1 (0) = 4, we have c = −6400. Consequently, A1 (t) = t + 20 − 6400 . (t + 20)2 Next, dA2 3 2 6400 dA2 3 2[(t + 20)3 − 6400] + A2 = t + 20 − =⇒ + A2 = dt t + 20 t + 20 (t + 20)2 dt t + 20 (t + 20)3 d 2[(t + 20)3 − 6400] =⇒ [(t + 20)3 A2 ] = (t + 20)3 dt (t + 20)3 t + 20 12800t k =⇒ A2 (t) = − + . 2 (t + 20)3 (t + 20)3 61 Since A2 (0) = 20, we ﬁnd that k = 80000. Thus A2 (t) = and in particular, A2 (10) = t + 20 80000 12800t + , − 2 (t + 20)3 (t + 20)3 119 ≈ 13.2 g. 9 R 1 di + i = E (t) yields the ﬁrst-order linear 9. Plugging the given values into the diﬀerential equation dt L L di diﬀerential equation + 40i = 200, which has integrating factor I (t) = e40t . Therefore, dt d 40t (e i) = 200e40t =⇒ i(t) = 5 + ce−40t . dt Since i(0) = 0, we have c = −5. Consequently, i(t) = 5(1 − e−40t ). 1 E dq + q= yields the ﬁrst-order linear 10. Plugging the given values into the diﬀerential equation dt RC R dq diﬀerential equation + 10q = 20, which has integrating factor I (t) = e10t . Therefore, dt d (qe10t ) = 20e10t =⇒ q (t) = 2 + ce−10t . dt Since q (0) = 0, we have c = −2. Consequently, q (t) = 2(1 − e−40t ). di R 1 11. 
Plugging the given values into the diﬀerential equation + i = E (t) yields the ﬁrst-order linear dt L L di diﬀerential equation + 3i = 15 sin 4t, which has integrating factor I (t) = e3t . Therefore, dt d 3t 3e3t (e i) = 15e3t sin 4t =⇒ e3t i = (3 sin 4t − 4 cos 4t) + c =⇒ i(t) = 3 dt 5 Since i(0) = 0, we ﬁnd that c = 3 4 sin 4t − cos 4t + ce−3t . 5 5 3 12 . Therefore, i(t) = (3 sin 4t − 4 cos 4t + 4e−3t ). 5 5 dq 1 E 12. Plugging the given values into the diﬀerential equation + q= yields the ﬁrst-order linear dt RC R dq diﬀerential equation + 4q = 5 cos 3t, which has integrating factor I (t) = e4t . Therefore, dt d 4t e4t 1 (e q ) = 5e4t cos 3t =⇒ e4t q = (4 cos 3t + 3 sin 3t) + c =⇒ q (t) = (4 cos 3t + 3 sin 3t) + ce−4t . dt 5 5 1 Since q (0) = 1, we ﬁnd that c = . Thus, 5 1 dq 1 q (t) = (4 cos 3t + 3 sin 3t + e−4t ) and i(t) = = (9 cos 3t − 12 sin 3t − 4e−4t ). 5 dt 5 13. In an RC circuit for t > 0 the diﬀerential equation is given by dq 1 E + q = . If E (t) = 0 then dt RC R dq 1 d + q = 0 =⇒ (et/RC q ) = 0 =⇒ q = cet/RC , dt RC dt 62 and if q (0) = 5, then q (t) = 5e−t/RC . Then lim q (t) = 0. Yes, this is reasonable. As the time increases and t→∞ E (t) = 0, the charge will dissipate to zero. 1 E0 14. The diﬀerential equation governing the charge q (t) in this case is given by i + q= . Diﬀerentiating RC R 1 dq dq di this equation with respect to t, we obtain + = 0. Since = i, dt RC dt dt 1 d di + i = 0 =⇒ (et/RC i) = 0 =⇒ i(t) = ce−t/RC . dt RC dt Since q (0) = 0, we see that i(0) = E0 . Therefore, R i(t) = E0 −t/RC e . R E0 Integrating, we ﬁnd that q (t) = −RCe−t/RC + k , for some constant k . Using q (0) = 0, we obtain R E0 (k − RC ), so that k = RC . Hence, 0= R q (t) = E0 C 1 − e−t/RC . Then lim q (t) = E0 C , and lim i(t) = 0. t→∞ t→∞ q(t) E0k t Figure 49: Figure for Problem 14 di R E (t) di R 15. The diﬀerential equation governing the current in an RL circuit, + i= , becomes + i= dt L L dt L E0 sin ωt, since E (t) = E0 sin ωt. 
An integrating factor here is I (t) = eRt/L , so that L d Rt/L E0 Rt/L E0 (e i) = e sin ωt =⇒ i(t) = 2 [R sin ωt − ωL cos ωt] + Ae−Rt/L . dt L R + L2 ω 2 63 We can write this as i(t) = √ R2 R ωL E0 √ sin ωt − √ cos ωt + Ae−Rt/L . 2 ω2 2 + L2 ω 2 2 + L2 ω 2 +L R R Deﬁning the phase φ by cos φ = √ R2 R + L2 ω 2 sin φ = √ and R2 ωL , + L2 ω 2 we have i(t) = √ R2 E0 [cos φ sin ωt − sin φ cos ωt] + Ae−Rt/L . + L2 ω 2 That is, i(t) = √ R2 E0 sin (ωt − φ) + Ae−Rt/L . + L2 ω 2 Transient part of the solution: iT (t) = Ae−Rt/L . Steady state part of the solution: iS (t) = √ E0 sin (ωt − φ). R2 + L2 ω 2 di E0 R + ai = , i(0) = 0, where a = , and E0 denotes the dt L L constant EMF. An integrating factor for the diﬀerential equation is I = eat , so that the diﬀerential equation E0 at E0 d at (e i) = e . Integrating yields i(t) = + c1 e−at . The given initial can be written in the form dt L aL E0 E0 E0 E0 condition requires c1 + = 0, so that c1 = − . Hence i(t) = (1 − e−at ) = (1 − e−at ). aL aL aL R 16. We must solve the initial value problem dq 1 E (t) dq 1 17. Substituting E (t) = E0 e−at into the diﬀerential equation + q= , we obtain + q= dt RC R dt RC E0 −at e , with integrating factor I (t) = et/RC . Thus, R E0 (1/RC −a)t E0 C E0 C d t/RC (e q) = e =⇒ q (t) = e−t/RC e(1/RC −a)t + k =⇒ q (t) = e−at +ke−t/RC . dt R 1 − aRC 1 − aRC Imposing the initial condition q (0) = 0 (capacitor initially uncharged) requires k = − q (t) = E0 C (e−at − e−t/RC ) 1 − aRC and i(t) = dq E0 C = dt 1 − aRC 18. From the given second-order diﬀerential equation, we use i(t) = i 1 qdq LC =⇒ i2 = − 1 −t/RC e − ae−at . RC dq di di dq di and = · =i to obtain dt dt dq dt dq 1 di + q = 0. Then dq LC idi = − E0 C , so that 1 − aRC 12 q + k, LC 64 for some constant k . Since q (0) = q0 and i(0) = 0, we have k = i2 = − 2 q0 . Thus, LC 12 q2 q + 0 =⇒ i = ± LC LC 2 q0 − q 2 √ LC 2 q0 − q 2 dq =⇒ =± √ dt LC dt dq = ±√ =⇒ 2 LC q0 − q 2 q t =⇒ sin−1 ( ) = ± √ + k1 q0 LC t + k1 . 
=⇒ q = q0 sin ± √ LC Since q (0) = q0 , we have q0 = q0 sin k1 , so that k1 = q (t) = q0 sin ± √ and i(t) = π 2 + 2nπ , where n is an integer. Hence, t π + 2 LC = q0 cos dq q0 = −√ sin dt LC √ 19. In this case, we have the second-order diﬀerential equation √ t LC t . LC d2 q 1 E0 dq + q= . Since i = , then 2 dt LC L dt di dq di d2 q di = =i . = dt2 dt dq dt dq Hence the original diﬀerential equation can be written as i di 1 E0 + q= dq LC L =⇒ idi + 1 E0 qdq = dq LC L =⇒ i2 q2 E0 q + = + A, 2 2LC L q2 E0 q 0 i2 q2 E0 q for some constant A. Since i(0) = 0 and q (0) = q0 , we ﬁnd that A = 0 − . From + = +A 2LC L 2 2LC L we get that 1/2 1/2 2E0 q q2 (2E0 C )2 (q − E0 C )2 i = 2A + − =⇒ i = 2A + − . L LC LC LC We let D2 = 2A + (E0 C )2 , so that LC i dq =D 1− dt Then since q (0) = 0, we have B = q − E0 C √ D LC √ LC sin−1 2 1 /2 =⇒ √ q − E0 C √ D LC q − E0 C √ = sin D LC LC sin−1 q − E0 C √ D LC and therefore t+B √ , LC = t + B. 65 which implies that √ q (t) = D LC sin Since D2 = ± t+B √ LC + E0 c =⇒ i = dq = D cos dt t+B √ . LC 2 2A + (E0 C )2 q0 E0 q 0 and A = − , we can substitute to eliminate A and obtain D = LC 2LC L |q0 − E0 C | √ . Thus LC q (t) = ±|q0 − E0 C | sin t+B √ LC + E0 c. Solutions to Section 1.8 True-False Review: 1. TRUE. We have f (tx, ty ) = 2(xt)(yt) − (xt)2 2xyt2 − x2 t2 2xy − x2 = = = f (x, y ), 2 2 + y 2 t2 2(xt)(yt) + (yt) 2xyt 2xy + y 2 so f is homogeneous of degree zero. 2. FALSE. We have f (tx, ty ) = (yt)2 y 2 t2 y2 t y2 = = = , (xt) + (yt)2 xt + y 2 t2 x + y2 t x + y2 so f is not homogeneous of degree zero. 3. FALSE. Setting f (x, y ) = 1+xy 2 1+x2 y , we have f (tx, ty ) = 1 + (xt)(yt)2 1 + xy 2 t3 = = f (x, y ), 1 + (xt)2 (yt) 1 + x2 yt3 so f is not homogeneous of degree zero. Therefore, the diﬀerential equation is not homogeneous. 4. TRUE. Setting f (x, y ) = x2 y 2 x4 +y 4 , f (tx, ty ) = we have (xt)2 (yt)2 x2 y 2 t4 x2 y 2 = 44 =4 = f (x, y ). 
(xt)4 + (yt)4 x t + y 4 t4 x + y4 Therefore, f is homogeneous of degree zero, and therefore, the diﬀerential equation is homogeneous. 5. TRUE. This is veriﬁed in the calculation leading to Theorem 1.8.5. 6. TRUE. This is veriﬁed in the calculation leading to (1.8.12). 7. TRUE. We can rewrite the equation as y− √ xy = √ xy 1/2 , √ √ which is the proper form for a Bernoulli equation, with p(x) = − x, q (x) = x, and n = 1/2. 66 8. FALSE. The presence of an exponential exy involving y prohibits this equation from having the proper form for a Bernoulli equation. 9. TRUE. This is a Bernoulli equation with p(x) = x, q (x) = x2 , and n = 2/3. Unless otherwise indicated in this section V = y dy dV , =V +x , and t > 0. x dx dx Problems: 1. We have f (tx, ty ) = (tx)2 − (ty )2 x2 − y 2 = = f (x, y ). Thus f is homogeneous of degree zero. We write (tx)(ty ) xy f (x, y ) = y 1 − ( x )2 1−V2 x2 − y 2 = = = F (V ). y xy V x 2. We have f (tx, ty ) = (tx) − (ty ) = t(x − y ) = tf (x, y ). Thus f is homogeneous of degree one. 3. We have f (tx, ty ) = ty (tx) sin ( tx ) − (ty ) cos ( tx ) ty y x degree zero. We write f (x, y ) = y x sin x − y y x = y cos x y x sin x − y cos x y = y 1 sin V − V cos V = F (V ). V (tx)2 + (ty )2 |t| x2 + y 2 = = tx − ty t(x − y ) 4. We have f (tx, ty ) = degree zero. We write f (x, y ) = = f (x, y ). Thus f is homogeneous of y 1 + ( x )2 = y 1− x x2 + y 2 = f (x, y ). Thus f is homogeneous of x−y √ 1+V2 = F (V ). 1−V 5. We have f (tx, ty ) = ty . Thus f is not homogeneous. tx − 1 6. We have f (tx, ty ) = tx − 3 5(ty ) + 9 3(tx) + 5(ty ) 3x + 5y + = = = f (x, y ). Thus f is homogeneous of ty 3(ty ) 3(ty ) 3y degree zero. We write f (x, y ) = 7. We have f (tx, ty ) = write f (x, y ) = 8. We have f (tx, ty ) = y 3 + 5x 3x + 5y 3 + 5V = = = F (V ). y 3y 3x 3V (tx)2 + (ty )2 = tx x2 + y 2 = f (x, y ). Thus f is homogeneous of degree zero. We x y |x| 1 + ( x )2 x2 + y 2 y = =− 1+ x x x (tx)2 + 4(ty )2 − (tx) + (ty ) = (tx) + 3(ty ) 2 = − 1 + V 2 = F (V ). 
x2 + 4y 2 − x + y = f (x, y ). Thus f is homogex + 3y 67 neous of degree zero. We write x2 + 4y 2 − x + y = x + 3y f (x, y ) = y 1 + 4( x )2 − 1 + y 1 + 3x y x √ = 1 + 4V 2 − 1 + V = F (V ). 1 + 3V 2y dy 3y ) = . x dx x d dV dV dy with (xV ) = V + x , we obtain (3 − 2V ) V + x = 3V . Substituting V = y/x and replacing dx dx dx dx dV 3V Thus, x = − V . A short algebraic manipulation allows us to separate variables as follows: dx 3 − 2V dx 3 3 − 2V dV = . Integrating on each side, we obtain − − ln |V | = ln |x| + c1 . Finally, replacing V with 2V 2 x 2V y/x, we conclude that 9. We divide both sides of the given diﬀerential equation through by x to obtain (3 − − y 3x 3x − ln = ln |x| + c1 =⇒ ln y = − + c2 =⇒ y 2 = ce−3x/y . 2y x 2y 10. We divide the given diﬀerential equation through by x2 on the top and bottom of the right-hand dy 1 dy d dV y2 side to obtain = . Substituting V = y/x and replacing with (xV ) = V + x , we 1+ dx 2 x dx dx dx dV 1 obtain V + x = (1 + V )2 . Using a short algebraic manipulation to separate the variables, we ﬁnd that dx 2 1 1 1 = dx. Integrating on each side, we obtain tan−1 V = ln |x| + c. Finally, replacing V with y/x, V2+1 2x 2 we conclude that y 1 tan−1 = ln |x| + c. x 2 y dy − = dx x y dy d dV dV cos . Substituting V = y/x and replacing with (xV ) = V +x , we obtain sin V V + x −V = x dx dx dx dx dV dx cos V . Therefore, sin V x = cos V . Separating variables, we ﬁnd that tan V dV = . Integratdx x ing on each side, we obtain − ln | cos V | = ln |x| + c1 , or, upon replacing V with y/x and rearranging, y = −c1 . Exponentiation and removal of absolute value yields ln |x| · cos x 11. We divide both sides of the given diﬀerential equation through x to obtain sin x cos y x c y = c2 =⇒ y (x) = x cos−1 . x x 12. We divide both sides of the given diﬀerential equation through by x to obtain dy = dx Substituting V = y/x and replacing 16x2 − y 2 + y dy =⇒ = x dx y y 16 − ( )2 + . 
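An implicit solution such as the one in Problem 10 can be checked by differentiating the level curve and recovering the original equation. A sympy sketch (not part of the text), assuming the equation was dy/dx = (x + y)²/(2x²), as the worked steps indicate:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# Problem 10: implicit solution arctan(y/x) = (1/2) ln x + c
phi = sp.atan(y(x)/x) - sp.log(x)/2

# Differentiate the level curve phi = c and solve for y'
dydx = sp.solve(sp.diff(phi, x), sp.Derivative(y(x), x))[0]

# It should reproduce dy/dx = (x + y)^2 / (2x^2)
ok = sp.simplify(dydx - (x + y(x))**2/(2*x**2)) == 0
```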
x x √ dy d dV dV with (xV ) = V + x , we obtain V + x = 16 − V 2 + V , dx dx dx dx 68 √ dV dV dx = 16 − V 2 . Separating variables, we have √ = . Integrating on each side, we obtain 2 dx x 16 − V V y sin−1 ( ) = ln |x| + c. Finally, replacing V with y/x, we conclude that sin−1 ( ) = ln |x| + c. 4 4x or x 13. We ﬁrst rewrite the given diﬀerential equation in the equivalent form y = (9x2 + y 2 ) + y . Factoring x y |x| 9 + ( x )2 + y . Since we are told to solve the diﬀerential x y y equation on the interval x > 0 we have |x| = x, so that y = 9 + ( x )2 + x , which we recognize as being homogeneous. We therefore let y = xV , so that y = xV√+ V . Substitution into the preceding diﬀerential √ equation yields xV + V = 9 + V 2 + V . That is, xV = 9 + V 2 . Separating the variables in this equation √ 1 1 we obtain √ dV = dx. Integrating we obtain ln (V + 9 + V 2 ) = ln x + c. Exponentiating both x 9+V2 √ y 2 = c x. Substituting sides yields V + 9 + V = V and multiplying through by x yields the general 1 x 2 + y 2 = c x2 . solution y + 9x 1 out an x2 from the square root yields y = 14. The given diﬀerential equation can be written in the equivalent form y (x2 − y 2 ) dy = , dx x(x2 + y 2 ) which we recognize as being ﬁrst-order homogeneous. The substitution y = xV yields V +x so that dV V (1 − V 2 ) dV 2V 3 = =⇒ x =− , 2 dx 1+V dx 1+V2 1+V2 dv = −2 V3 V −2 dx =⇒ − + ln |V | = −2 ln |x| + c1 . x 2 Consequently, − x2 + ln |xy | = c1 . 2y 2 dy y y 15. We rearrange the given diﬀerential equation to read = ln . Substituting V = y/x and dx x x d dV dV dy with (xV ) = V + x , we obtain V + x = V ln V . Separating the variables, we have replacing dx dx dx dx dV dx ln V − 1 = . Integrating on each side, we have ln | ln V − 1| = ln |x| + c1 . That is, ln = c1 . V (ln V − 1) x x Exponentiation on both sides gives ln V − 1 = c2 x, or ln V = 1 + c2 x. Exponentiating once more gives V = e1+c2 x . Therefore, y (x) = xe1+c2 x . 16. 
Substituting V = y/x and replacing V +x dy d dV with (xV ) = V + x , we obtain dx dx dx dV V 2 + 2V − 2 −V 3 + 2V 2 + V − 2 V2−V +1 dx dV = =⇒ x = =⇒ 3 dV = − . 2 2−V +1 2−V +2 dx 1−V +V dx V V − 2V x We can use a partial fractions decomposition on the left-hand side: V2−V +1 V2−V +1 1 1 1 dv = dv = − + dV. 2−V +2 V − 2V (V − 1)(V + 2)(V + 1) V − 2 2(V − 1) 2(V + 1) 3 69 Therefore, 1 1 1 dx − + dV = − . V − 2 2(V − 1) 2(V + 1) x Integrating on both sides, we have (V − 2)2 (V + 1) (y − 2x)2 (y + x) 1 1 = −2 ln x+c2 =⇒ = c. ln |V − 2|− ln |V − 1|+ ln |V + 1| = − ln |x|+c1 =⇒ ln 2 2 V −1 y−x 17. Dividing the given diﬀerential equation through on both sides by x2 yields 2 Substituting V = y/x and replacing 2 2 y dy y − e−y /x + 2 x dx x 2 = 0. dy d dV with (xV ) = V + x , we obtain dx dx dx 2V V +x dV dx 2 − (e−V + 2V 2 ) = 0. Simplifying, we obtain 2 2 dV dx = e−V =⇒ eV (2V dV ) = . dx x Integrating on each side, we conclude that 2V x 2 eV = ln |x| + c1 =⇒ ey 2 /x2 = ln (cx) =⇒ y 2 = x2 ln (ln (cx)). dy y y 18. Dividing the given diﬀerential equation through on both sides by x2 yields = ( )2 + 3 + 1. dx x x dy d dV Substituting V = y/x and replacing with (xV ) = V + x , we obtain dx dx dx v+x dv dv dv dx = v 2 + 3v + 1 =⇒ x = (v + 1)2 =⇒ = . dx dx (v + 1)2 x Integrating on each side, we conclude that − 1 1 = ln |x| + c1 =⇒ − y = ln |x| + c1 v+1 x +1 y 1 =⇒ − +1 = x ln |x| + c1 y 1 =⇒ − = 1 + x ln |x| + c1 1 =⇒ y = −x 1 + . ln |x| + c1 19. Dividing the given diﬀerential equation through on both sides by y and factoring x from the right-hand side, we obtain y 1 + ( x )2 − 1 dy = . y dx x 70 √ dy 1+V2−1 d dV dV Substituting V = y/x and replacing with (xV ) = V + x , we obtain V + x = . dx dx dx dx V √ V dx dV 1+V2−1−V2 = , and separating variables, we ﬁnd that √ dV = . Therefore, x dx V x 1+V2−1−V2 c Integrating on both sides, we have ln |1 − V | = ln |x| + c1 =⇒ |x(1 − V )| = c2 =⇒ 1 − V = =⇒ V 2 = x c c c2 c2 − 2 + 1 =⇒ V 2 = 2 − 2 =⇒ y 2 = c2 − 2cx. 2 x x x x 20. 
Dividing through by x2 on both sides of the given diﬀerential equation, we ﬁnd that 2 y y dy y 4− . +2 = x dx x x dy d dV with (xV ) = V + x , we obtain dx dx dx Substituting V = y/x and replacing 2(V + 2) V + x dV dx = V (4 − V ). A short algebraic manipulations yields 2x dV 3V 2 =− . dx V +2 Separating variables, we obtain 2 V +2 dx dV = −3 . 2 V x Integrating on both sides, we have 2 ln |V | − 4 = −3 ln |x| + c1 =⇒ xy 2 = ce4x/y . V dy y dy 21. We rewrite the diﬀerential equation as = tan(y/x) + . Substituting V = y/x and replacing dx x dx d dV with (xV ) = V + x , we obtain dx dx V +x dV dx dV = tan V + V =⇒ x = tan V =⇒ cot V dV = . dx dx x Integrating on both sides, we ﬁnd that ln | sin V | = ln |x| + c1 =⇒ sin V = cx =⇒ V = sin−1 (cx) =⇒ y (x) = x sin−1 (cx). 22. We rewrite the diﬀerential equation as with dy = dx x y 2 y dy + 1 + . Substituting V = y/x and replacing x dx d dV (xV ) = V + x , we obtain dx dx V +x dV = dx 1 V 2 + 1 + V =⇒ x dV = dx 1 V 2 + 1 =⇒ dV 1 V 2 = +1 dx V dV dx =⇒ √ = . x x 1+V2 71 Next, we integrate on both sides to obtain V with y/x yields the implicit solution √ 1 + V 2 = ln |x| + c. A short algebraic manipulation and replacing y 2 = x2 (ln |x| + c)2 − 1 . 23. The given diﬀerential equation can be written as (x−4y )dy = (4x+y )dx. Converting to polar coordinates we have x = r cos θ =⇒ dx = cos θdr − r sin θdθ and y = r sin θdr + r cos θdθ. Substituting these results into the preceding diﬀerential equation and simplifying yields the separable equation 4r−1 dr = dθ which can be integrated directly to yield 4 ln r = θ + c, so that r = c1 eθ/4 . 24. THE INITIAL CONDITION GIVEN IN THE TEXT DOES NOT WORK. The given diﬀerential equation can be rewritten as dy = dx Substituting V = y/x and replacing 2 2y −1 x . y 1+ x d dV dy with (xV ) = V + x , we obtain dx dx dx V +x dV 2(2V − 1) = . dx 1+V Therefore, x −V 2 + 3V − 2 (V − 2)(V − 1) dV = =− . dx V +1 V +1 Separating variables, we have V +1 dx dV = − . 
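Explicit solutions such as the one in Problem 21 can be verified by direct substitution into the differential equation. A sympy sketch (an addition, not part of the text):

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)

# Problem 21: claimed solution y(x) = x asin(cx) of dy/dx = y/x + tan(y/x)
y = x*sp.asin(c*x)

residual = sp.diff(y, x) - y/x - sp.tan(y/x)
ok = sp.simplify(residual) == 0
```

sympy rewrites tan(asin(cx)) as cx/√(1 − c²x²), which is exactly the extra term produced by differentiating x·asin(cx), so the residual vanishes.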
(V − 2)(V − 1) x Using a partial fractions decomposition on the left-hand side, this becomes 2 3 − V −2 V −1 dV = − dx . x Integrating on each side, this becomes 3 ln |V − 2| − 2 ln |V − 1| = − ln |x| + c. Substituting V = y/x, we recover 3 ln y y − 2 − 2 ln − 1 = − ln |x| + c. x x y 2− dy x . Substituting V = y/x, replacing dy 25. The given diﬀerential equation can be written as = y dx dx 1+4 x d dV with (xV ) = V + x , and separating variables, we obtain dx dx V +x dV 2−V dV 2 − 2V − 4V 2 1 = =⇒ x = =⇒ dx 1 + 4V dx 1 + 4V 2 1 + 4V dV = − 2V 2 + V − 1 dx . x 72 Integrating on each side of the equation, we obtain 1 1 1 ln |2v 2 + v − 1| = − ln |x| + c =⇒ ln |x2 (2v 2 + v − 1)| = c =⇒ ln |2y 2 + yx − x2 | = c. 2 2 2 Since y (1) = 1, we have c = 1 1 1 ln 2. Thus ln |2y 2 + yx − x2 | = ln 2, which implies that 2y 2 + yx − x2 = 2. 2 2 2 26. The given diﬀerential equation can be written as replacing dy d dV with (xV ) = V + x , we obtain dx dx dx V +x dy y = − dx x dV =V − dx 1+ y2 . Substituting V = y/x and x2 1 + V 2. √ dV dx dV = − 1 + V 2 . Separating variables, we have √ = − . Integrating on each side of 2 dx x 1+V √ this equation, we obtain ln |V + 1 + V 2 | = − ln |x| + c. That is, Therefore, x ln y + x 1+ y2 = − ln |x| + c. x2 Since y (3) = 4, we ﬁnd that c = 2 ln 3 = ln 9. Hence, we have ln y + x 1+ y2 = − ln |x| + ln 9. x2 A short algebraic manipulation gives ln |y + Therefore, y + x2 + y 2 | = ln 9. x2 + y 2 = 9. 27. Dividing the given diﬀerential equation through by x, we have V = y/x and replacing dy y −= dx x dy d dV with (xV ) = V + x , we obtain dx dx dx V +x dV =V + dx 4 − V 2. Separating variables, we have √ dv dx = , 2 x 4−v and integration on both sides yields sin−1 since x > 0. v 2 = ln |x| + c =⇒ sin−1 y = ln x + c, 2x 4− y x 2 . Substituting 73 dV 1+V2 = . Separating the variables and intedx a−V y 1 1 grating, we obtain a tan−1 V − ln (1 + V 2 ) = ln x + ln c, or equivalently, a tan−1 − ln (x2 + y 2 ) = ln c. 
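For an initial-value problem such as Problem 25, two things can be checked by machine: that the initial point lies on the implicit solution curve, and that implicit differentiation recovers the differential equation. A sympy sketch (not part of the text):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Problem 25: implicit solution 2y^2 + xy - x^2 = 2 with y(1) = 1
phi = 2*y(x)**2 + x*y(x) - x**2

# The initial point (1, 1) must lie on the curve phi = 2
on_curve = phi.subs([(y(x), 1), (x, 1)]) == 2

# Implicit differentiation should recover dy/dx = (2x - y)/(x + 4y)
dydx = sp.solve(sp.diff(phi, x), sp.Derivative(y(x), x))[0]
ok = sp.simplify(dydx - (2*x - y(x))/(x + 4*y(x))) == 0
```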
2 x2 Substituting x = r cos θ and y = r sin θ yields aθ − ln r = ln c. Exponentiating then gives r = keaθ . √ (b) The initial condition y (1) = 1 corresponds to r( π ) = 2. Imposing this condition on the polar form √ −π/8 4 of the solution obtained in (a) yields k = 2e . Hence, the solution to the initial value problem is √ (θ−π/4)/2 dy 2x + y 1 = . Consequently, every solution r = 2e . When a = , the diﬀerential equation is 2 dx x − 2y x curve has a vertical tangent line at points of intersection with the line y = . The maximum interval of 2 x existence for the solution of the initial value problem can be obtained by determining where y = intersects 2 √ (θ−π/4)/2 1 x . The line y = has a polar equation tan θ = . The corresponding values of θ the curve r = 2e 2 2 1 −1 are θ = θ1 = tan ≈ 0.464, θ = θ2 = θ1 + π ≈ 3.61. Consequently, the x-coordinates of the intersection 2 √ √ points are x1 = r cos θ1 = 2e(θ1 −π/4)/2 cos θ1 ≈ 1.08, x2 = r cos θ2 = 2e(θ2 −π/4)/2 cos θ2 ≈ −5.18. Hence the maximum interval of existence for the solution is approximately (−5.18, 1.08). 28. (a) Substituting y = xV and simplifying yields x (c) y(x) 2 1 -5 -6 -4 -3 -2 -1 1 0 x -1 -2 -3 -4 Figure 50: Figure for Problem 28(c) 29. The given family of curves satisﬁes x2 + y 2 = 2cy , so implicit diﬀerentiation gives 2x + 2y Since c = dy dy = 2c . dx dx x2 + y 2 , we have 2y dy x 2xy = =2 . dx c−y x − y2 Therefore, the orthogonal trajectories satisfy y 2 − x2 dy = . Let y = xV so that dx 2xy dy dV =V +x . dx dx Substituting these results into the last equation yields x dV V2+1 y2 c2 =− =⇒ ln |V 2 + 1| = − ln |x| + c1 =⇒ 2 + 1 = =⇒ x2 + y 2 = 2c2 x. dx 2V x x 74 y(x) x Figure 51: Figure for Problem 29 30. The given family of curves satisﬁes (x − c)2 + (y − c)2 = 2c2 , so implicit diﬀerentiation gives 2(x − c) + dy x2 + y 2 2(y − c) = 0. Since c = , we have dx 2(x + y ) dy c−x y 2 − 2xy − x2 = =2 . dx y−c y + 2xy − x2 dy y 2 + 2xy − x2 dV dy . Let y = xV so that =2 = V +x . 
2 dx x + 2xy − y dx dx dV V 2 + 2V − 1 Substituting these results into the last equation yields V + x = . Separating the variables, dx 1 + 2V − V 2 1 + 2V − V 2 1 we have 3 dv = dx. A partial fractions decomposition can be used on the left-hand side to 2+V −1 V −V x give 1 2V 1 − dV = dx. V −1 V2+1 x V −1 Integration gives ln |V − 1| − ln |V 2 + 1| = ln |x| + c1 , for some constant c1 . That is, 2 = c2 x. Replacing V +1 V with y/x and doing a short algebraic manipulation gives the equations of the orthogonal trajectories: Therefore, the orthogonal trajectories satisfy x2 + y 2 = 2k (x y ) =⇒ (x − k )2 + (y + k )2 = 2k 2 . y(x) x Figure 52: Figure for Problem 30 31. (a) Let r represent the radius of one of the √ circles with center at (a, ma) and passing through (0, 0). Then we have r = (a − √ 2 + (ma − 0)2 = |a| 1 + m2 . Thus, the circle’s equation can be written as 0) (x − a)2 + (y − ma)2 = (|a| 1 + m2 )2 , or (x − a)2 + (y − ma)2 = a2 (1 + m2 ). 75 (b) From part (a), we have (x − a)2 + (y − ma)2 = a2 (1 + m2 ). Therefore, a = the ﬁrst equation with respect x and solving we obtain yields x2 + y 2 . Diﬀerentiating 2(x + my ) dy a−x = . Substituting for a and simplifying dx y − ma dy y 2 − x2 − 2mxy = . Therefore, the orthogonal trajectories satisfy dx my 2 − mx2 + 2xy y m − m( y )2 − 2 x mx2 − my 2 − 2xy dy =2 = y2 x y. dx y − x2 − 2mxy ( x ) − 1 − 2m x Let y = xV , so that dy dV =V +x . Substituting these results into the last equation yields dx dx V +x m − mV 2 − 2V xdV (m − V )(1 + V 2 ) dV =2 =⇒ = dx V − 1 − 2mV dx V 2 − 2mV − 1 2 dx V − 2mV − 1 dV = =⇒ (m − V )(1 + V 2 ) x dV 2V dx =⇒ − dV = V −m 1+V2 x 2 =⇒ ln |V − m| − ln (1 + V ) = ln |x| + c1 =⇒ V − m = c2 x(1 + V 2 ) =⇒ y − mx = c2 x2 + c2 y 2 =⇒ x2 + y 2 + cmx − cy = 0. Completing the square we obtain (x + cm/2)2 + (y − c/2)2 = c2 /4(m2 + 1). 
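A quick sanity check for orthogonal-trajectory computations such as Problem 30 is that the two slope fields multiply to −1 at every point. A sympy sketch (an addition, not part of the text):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Problem 30: slope of the original family of circles ...
m1 = (y**2 - 2*x*y - x**2)/(y**2 + 2*x*y - x**2)
# ... and the slope claimed for the orthogonal trajectories
m2 = (y**2 + 2*x*y - x**2)/(x**2 + 2*x*y - y**2)

# Orthogonal families have slopes that are negative reciprocals
perpendicular = sp.simplify(m1*m2 + 1) == 0
```

The same one-line product test applies to Problem 29 and to the true-false statement about negative-reciprocal slopes in Section 1.1.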
Now letting b = c/2, the last equation becomes (x + bm)2 + (y − b)2 = b2 (m2 + 1) which is a family of circles lying on the line x = −my and passing through the origin. (c) y(x) x Figure 53: Figure for Problem 31(c) 32. Implicit diﬀerentiation of x2 + y 2 = c with respect to x yields m1 = x dy = − = m2 . Now dx y m2 − tan ( π ) −x/y − 1 x+y 4 = . π= 1 + m2 tan ( 4 ) 1 − x/y x−y 76 Let y = xV , so that dy dV =V +x . Substituting these results into the last equation yields dx dx V +x dx dV 1+V 1−V dV = = =⇒ dx 1−V 1+V2 x V 1 dx =⇒ − +2 dV = 2 1+V V +1 x 1 =⇒ − ln (1 + V 2 ) + tan−1 V = ln |x| + c1 . 2 Therefore, the equations of the oblique trajectories are ln (x2 + y 2 ) − 2 tan−1 (y/x) = c2 . 33. We have dy = 6y/x = m2 . Now dx m1 = Let y = xV , so that m2 − tan ( π ) 6y/x − 1 6y − x 4 = . π= 1 + m2 tan ( 4 ) 1 + 6y/x 6y + x dv dy = v + x . Substitute these results into the last equation yields dx dx V +x dV 6V − 1 dV (3V − 1)(1 − 2V ) = =⇒ x = dx 6V + 1 dx 6V + 1 9 8 =⇒ − dV = 3V − 1 2V − 1 dx x =⇒ 3 ln |3V − 1| − 4 ln |2V − 1| = ln |x| + c1 . Therefore, the equations of the oblique trajectories are (3y − x)3 = k (2y − x)4 . 34. Implicit diﬀerentiation of x2 + y 2 = 2cx with respect to x gives 2x + 2y Since c = dy dy c−x = 2c, so that = . dx dx y x2 + y 2 dy y 2 − x2 , we ﬁnd that = = m2 . Now 2x dx 2xy m1 = m2 − tan ( π ) 4 1 + m2 tan ( π ) 4 y 2 − x2 −1 y 2 − x2 − 2xy 2xy = =2 . y − x2 + 2xy y 2 − x2 1+ 2xy 77 Let y = xV , so that dy dV =V +x . Substituting these results into the last equation yields dx dx V +x dV V 2 − 2V − 1 dV −V 3 − V 2 − V − 1 =2 =⇒ x = dx V + 2V − 1 dx V 2 + 2V − 1 2 −V − 2V + 1 dx =⇒ 3 dV = 2+V +1 V +V x 1 2V =⇒ − dV = V +1 V2+1 dx x =⇒ ln |V + 1| − ln (V 2 + 1) = ln |x| + c1 =⇒ ln |y + x| = ln |y 2 + x2 | + c1 . Therefore, the equations of the oblique trajectories are x2 + y 2 = 2k (x + y ) or, equivalently, (x − k )2 + (y − k )2 = 2k 2 . 35. (a) From y = cx−1 , we ﬁnd that y dy = −cx−2 = − . 
Therefore, dx x m1 = m2 − tan α0 −(y/x) − tan α0 = . 1 + m2 tan α0 1 − (y/x) tan α0 dV dy =V +x . Substituting these results into the last equation yields dx dx dV tan α0 + V 2V tan α0 − 2 2dx V +x = =⇒ 2 dV = − dx V tan α0 − 1 V tan α0 − 2V − tan α0 x =⇒ ln |V 2 tan α0 − 2V − tan α0 | = −2 ln |x| + c1 Let y = xV , so that =⇒ (y 2 − x2 ) tan α0 − 2xy = k. (b) See accompanying ﬁgure. y(x) x Figure 54: Figure for Problem 35(b) 36. (a) From x2 + y 2 = c, we ﬁnd that m1 = dy x = − . Therefore, dx y m2 − tan α0 −x/y − m x + my = = . 1 + m2 tan α0 1 − (x/y )m mx − y 78 Let y = xV , so that dy dV =V +x . Substituting these results into the last equation yields dx dx V +x 1 + mV dV 1+V2 dV = =⇒ x = dx m−V dx m−V V −m dx =⇒ dV = − 1+V2 x m dx V − dV = − =⇒ 2 2 1+V 1+V x 1 =⇒ ln (1 + V 2 ) − m tan V = − ln |x| + c1 . 2 In polar coordinates, r = x2 + y 2 and θ = tan−1 y/x, so this result becomes ln r − mθ = c1 =⇒ r = emθ , where k is an arbitrary constant. (b) See accompanying ﬁgure. y(x) x Figure 55: Figure for Problem 36(b) 1 dy − y 2 = 4x2 cos x. Let 37. This is a Bernoulli equation. Multiplying both sides by y results in y dx x du dy dy 1 du dy 1 u = y 2 , so = 2y or y = . Substituting these results into y − y 2 = 4x2 cos x yields dx dx dx 2 dx dx x du 2 2 −2 − u = 8x cos x, which has an integrating factor I (x) = x . Therefore, dx x d −2 (x u) = 8 cos x =⇒ x−2 u = 8 cos xdx + c dx =⇒ x−2 u = 8 sin x + c =⇒ u = x2 (8 sin x + c) =⇒ y 2 = x2 (8 sin x + c) √ =⇒ y = x 8 sin x + c. du dy dy 1 du 38. This is a Bernoulli equation. Let u = y −2 , so = −2y −3 or y −3 = . Substituting these dx dx dx 2 dx du results into the last equation yields − u tan x = −4 sin x. An integrating factor for this equation is dx 79 I (x) = cos x. Thus, d (u cos x) = −4 cos x sin x =⇒ u cos x = 4 cos x sin xdx dx 1 =⇒ u = (cos2 x + c) cos x c =⇒ y −2 = 2 cos x + cos x 1 =⇒ y (x) = c. 2 cos x + cos x du 1 39. This is a Bernoulli equation. Let u = y 2/3 , so that we have − u = 4x2 ln x. 
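The linear equation obtained in Problem 37 after the substitution u = y² can be checked by substituting the claimed u = x²(8 sin x + c) back in. A sympy sketch (not part of the text):

```python
import sympy as sp

x, c = sp.symbols('x c')

# Problem 37 reduces (via u = y^2) to du/dx - (2/x) u = 8 x^2 cos x
u = x**2*(8*sp.sin(x) + c)

residual = sp.diff(u, x) - 2*u/x - 8*x**2*sp.cos(x)
ok = sp.simplify(residual) == 0
```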
An integrating factor dx x 1 for this equation is I (x) = . Thus, x d −1 (x u) = 4x ln x =⇒ x−1 u = 4 x ln xdx + c dx =⇒ x−1 u = 2x2 ln x − x2 + c =⇒ u = x(2x2 ln x − x2 + c) =⇒ y 2/3 = x(2x2 ln x − x2 + c) =⇒ y (x) = x(2x2 ln x − x2 + c) 40. This is a Bernoulli equation. Let u = y 1/2 , so that we have for this equation is I (x) = x, so d (xu) = 3x 1 + x2 =⇒ xu = 3 dx . √ du 1 + u = 3 1 + x2 . An integrating factor dx x x 1 + x2 dx + c =⇒ xu = (1 + x2 )3/2 + c 1 c =⇒ u = (1 + x2 )3/2 + x x 1 c 1/2 2 3/2 =⇒ y = (1 + x ) + x x 1 c =⇒ y (x) = (1 + x2 )3/2 + x x 41. This is a Bernoulli equation. Let u = y −1 , so that we have this equation is I (x) = x−2 , so that 3 /2 2 . du 2 − u = −6x4 . An integrating factor for dx x d −2 (x u) = −6x2 =⇒ x−2 u = −2x3 + c dx =⇒ u = −2x5 + cx2 =⇒ y −1 = −2x5 + cx2 1 =⇒ y (x) = 2 . x (c − 2x3 ) 80 42. Rewriting this equation as y + 2y = −y 3 x2 , we see that it is Bernoulli. Let u = y −2 , so that we have x du 1 1 − u = 2x2 . An integrating factor for this equation is I (x) = , so dx x x d −1 (x u) = 2x =⇒ x−1 u = x2 + c dx =⇒ u = x3 + cx =⇒ y −2 = x3 + cx =⇒ y (x) = 1 . x3 + cx 2(b − a) √ y = y , we see that it is Bernoulli. Let u = y 1/2 , so that (x − a)(x − b) du (b − a) 1 x−a we have − u = . An integrating factor for this equation is I (x) = , so that dx (x − a)(x − b) 2 x−b 43. Rewriting this equation as y − d dx x−a u x−b = x−a x−a 1 =⇒ u = [x + (b − a) ln |x − b| + c] 2(x − b) x−b 2 x−b =⇒ y 1/2 = [x + (b − a) ln |x − b| + c] 2(x − a) =⇒ y (x) = 1 4 x−b x−a 44. This is a Bernoulli equation. Let u = y 1/3 , so that we have this equation is I (x) = x2 , so 2 2 [x + (b − a) ln |x − b| + c] . du 2 cos x + u= . An integrating factor for dx x x d2 (x u) = x cos x =⇒ x2 u = cos x + x sin x + c dx cos x + x sin x + c =⇒ y 1/3 = x2 cos x + x sin x + c =⇒ y (x) = x2 45. This is a Bernoulli equation. Let u = y 1/2 , so that we have 2 3 . du + 2xu = 2x3 . 
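Problem 41's final answer can be substituted directly into the original Bernoulli equation. A sympy sketch (an addition, not part of the text), where the form dy/dx + (2/x)y = 6x⁴y² is inferred from the substitution u = y⁻¹ shown in the solution:

```python
import sympy as sp

x, c = sp.symbols('x c')

# Problem 41: claimed solution y = 1/(x^2 (c - 2x^3)) of the inferred
# Bernoulli equation dy/dx + (2/x) y = 6 x^4 y^2
y = 1/(x**2*(c - 2*x**3))

residual = sp.diff(y, x) + 2*y/x - 6*x**4*y**2
ok = sp.simplify(residual) == 0
```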
An integrating factor for dx this equation is I (x) = ex , so 2 2 2 d x2 (e u) = 2ex x3 =⇒ ex u = ex (x2 − 1) + c dx 2 =⇒ y 1/2 = x2 − 1 + ce−x 2 =⇒ y (x) = [(x2 − 1) + ce−x ]2 . 46. This is a Bernoulli equation. Let u = y −2 , so that we have du 1 + u = −4x. An integrating factor dx x ln x 81 for this equation is I (x) = ln x, so d (u ln x) = −4x ln x =⇒ u ln x = x2 − 2x2 ln x + c dx ln x =⇒ y 2 = 2 . x (1 − 2 ln x) + c du 1 + u = 3x. An integrating factor for dx x 47. This is a Bernoulli equation. Let u = y 1−π , so that we have this equation is I (x) = x, so d (xu) = 3x2 =⇒ xu = x3 + c dx x3 + c =⇒ y 1−π = x x3 + c =⇒ y (x) = x 48. This is a Bernoulli equation. Let u = y 2 , so that we have for this equation is I (x) = sin x, so 1/(1−π ) . du + u cot x = 8 cos3 x. An integrating factor dx d (u sin x) = 8 cos3 x sin x =⇒ u sin x = −2 cos4 x + c dx −2 cos4 x + c . =⇒ y 2 = sin x √ 49. This is a Bernoulli equation. Let u = y 1− 3 , so that we have for this equation is I (x) = sec x + tan x, so that du + u sec x = sec x. An integrating factor dx d [(sec x + tan x)u] = sec x(sec x + tan x) =⇒ (sec x + tan x)u = tan x + sec x + c dx √ 1 =⇒ y 1− 3 = 1 + sec x + tan x =⇒ y (x) = 50. This is a Bernoulli equation. Let u = y −1 , so that we have c 1+ sec x + tan x √ 1/(1− 3) . du 2x − u = −x. An integrating factor dx 1 + x2 82 for this equation is I (x) = d dx 1 , so that 1 + x2 u 1 + x2 =− x u xdx =⇒ =− +c 2 2 1+x 1+x 1 + x2 u 1 =⇒ = − ln (1 + x2 ) + c 1 + x2 2 1 =⇒ u = (1 + x2 ) − ln (1 + x2 ) + c 2 1 =⇒ y −1 = (1 + x2 ) − ln (1 + x2 ) + c . 2 Since y (0) = 1, we ﬁnd that c = 1. Therefore 1 1 = (1 + x2 ) − ln (1 + x2 ) + 1 . y 2 Hence, y (x) = 1 . 1 2 ) − ln (1 + x2 ) + 1 (1 + x 2 51. This is a Bernoulli equation. Let u = y −2 , so that we have factor for this equation is I (x) = csc2 x, so that du − 2u cot x = −2 sin3 x. An integrating dx d (u csc2 x) = − sin x =⇒ u csc2 x = 2 cos x + c. dx Since y (π/2) = 1, we ﬁnd that c = 1. Thus y 2 = y (x) = 52. 
Let V = ax + by + c, so that We conclude that 1 . Hence, sin x(2 cos x + 1) 2 1 . sin2 x(2 cos x + 1) dV dy dy dV dy 1 = a + b . Hence, b = − a, and thus, = dx dx dx dx dx b dV dV dV − a = bF (V ) =⇒ = bF (V ) + a =⇒ = dx. dx dx bF (V ) + a 53. Let V = 9x − y , so that dy dV dV =9− . Hence, = 9 − V 2 , and so dx dx dx dV = 9−V2 However, y (0) = 0, so c = 0. Thus, tanh−1 dx =⇒ 9x − y 3 1 tanh−1 (V /3) = x + c1 . 3 = 3x, or y (x) = 3(3x − tanh 3x). dV −a dx = F (V ). 83 54. Let V = 4x + y + 2, so that dV dy dV =4+ = 4 + V 2 =⇒ 2 = dx dx dx V +4 dV =⇒ = dx V2+4 1 =⇒ tan−1 (V /2) = x + c1 2 =⇒ tan−1 (2x + y/2 + 1) = 2x + c =⇒ y (x) = 2[tan (2x + c) − 2x − 1]. 55. Let V = 3x − 3y + 1, so that 1 dV 1 dV dy =1− =⇒ 1 − = sin2 V dx 3 dx 3 dx dV =⇒ = 3 cos2 V dx =⇒ sec2 V dV = 3 dx =⇒ tan V = 3x + c =⇒ tan (3x − 3y + 1) = 3x + c 1 =⇒ y (x) = [3x − tan−1 (3x + c) + 1]. 3 56. Since V = xy , we have V = xy + y , so that y = (V − y )/x. Substitution into the diﬀerential equation yields (V − y )/x = yF (V )/x =⇒ V = y [F (V ) + 1] =⇒ V = V [F (V ) + 1]/x, 1 1 dV =. so that V [F (V ) + 1] dx x 57. Substituting into 1 dV 1 = for F (V ) = ln V − 1 yields V [F (V ) + 1] dx x 1 1 1 dV = dx =⇒ ln(ln V ) = ln cx =⇒ V = ecx =⇒ y (x) = ecx . V ln V x x dy dv 58. (a) Since x = u − 1 and y = v + 1, we have that = . Substitution into the given diﬀerential dx du dv u + 2v equation yields = . du 2u − v (b) The diﬀerential equation obtained in (a) is ﬁrst-order homogeneous. We therefore let W = v/u and 1 + 2W 1 + W2 substitute into the diﬀerential equation to obtain W u + W = =⇒ W u = . Separating the 2−W 2−W 2 W 1 variables yields − dW = du. This can be integrated directly to obtain 1 + W2 1 + W2 u 2 tan−1 W − 1 ln (1 + W 2 ) = ln u + ln c. 2 84 −1 −1 Simplifying, we obtain cu2 (1 + W 2 ) = e4 tan W =⇒ c(u2 + v 2 ) = etan −1 and y yields c[(x + 1)2 + (y − 1)2 ] = etan [(y−1)/(x+1)] . (v/u) . Substituting back in for x 59. 
(a) Taking the derivative of y = Y (x) + v −1 (x) with respect to x gives y = Y (x) − v −2 (x)v (x). Now substitute into the given diﬀerential equation and simplify algebraically to obtain Y (x) + p(x)Y (x) + q (x)Y 2 (x) − v −2 (x)v (x) + v −1 (x)p(x) + q (x)[2Y (x)v −1 (x) + v −2 (x)] = r(x). We are told that Y (x) is a particular solution to the given diﬀerential equation, and therefore Y (x) + p(x)Y (x) + q (x)Y 2 (x) = r(x). Consequently, the transformed diﬀerential equation reduces to −v −2 (x)v (x) + v −1 p(x) + q (x)[2Y (x)v −1 (x) + v −2 (x)] = 0, or equivalently, v − [p(x) + 2Y (x)q (x)]v = q (x). (b) The given diﬀerential equation can be written as y − x−1 y − y 2 = x−2 , which is a Riccati diﬀerential equation with p(x) = −x−1 , q (x) = −1, and r(x) = x−2 . Since y (x) = −x−1 is a solution to the given diﬀerential equation, we make the substitution y (x) = −x−1 + v −1 (x). According to the result from part (a), the given diﬀerential equation then reduces to v − (−x−1 + 2x−1 )v = −1, or equivalently v − x−1 v = −1. d −1 (x v ) = This linear diﬀerential equation has an integrating factor I (x) = x−1 , so that v must satisfy dx −1 −x =⇒ v (x) = x(c − ln x). Hence the solution to the original equation is y (x) = − 1 1 1 + = x x(c − ln x) x 1 −1 . c − ln x 60. (a) If y = axr , then y = arxr−1 . Substituting these expressions into the given diﬀerential equation yields arxr−1 + 2axr−1 − a2 x2r = −2x−2 . For this to hold for all x > 0, the powers of x must match up on either side of the equation. Hence, r = −1. Then a is determined from the quadratic −a + 2a − a2 = −1 ⇐⇒ a2 − a − 2 = 0 ⇐⇒ (a − 2)(a + 1) = 0. Consequently, a = 2 or a = −1 in order for us to have a solution to the given diﬀerential equation. Therefore, two solutions to the diﬀerential equation are y1 (x) = 2x−1 , y2 (x) = −x−1 . (b) Taking Y (x) = 2x−1 and using the result from part (a) of the previous problem, we now substitute y (x) = 2x−1 + v −1 into the given Riccati equation. 
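The Riccati solution of Problem 59(b) can also be verified by substitution; both the particular solution Y = −1/x and the general solution must satisfy the equation. A sympy sketch (not part of the text):

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)

# Problem 59(b): Riccati equation y' - y/x - y^2 = 1/x^2 with
# particular solution Y = -1/x and general solution y = Y + 1/v
y = -1/x + 1/(x*(c - sp.log(x)))

residual = sp.diff(y, x) - y/x - y**2 - 1/x**2
ok = sp.simplify(residual) == 0
```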
The result is (−2x−2 − v −2 v ) + 2x−1 (2x−1 + v −1 ) − (4x−2 + 4x−1 v −1 + v −2 ) = −2x−2 . Simplifying this equation yields the linear equation v + 2x−1 v = −1. Multiplying by the integrating factor −1 d2 I (x) = e 2x dx = x2 results in the integrable diﬀerential equation (x v ) = −x2 . Integrating this dx diﬀerential equation we obtain 1 v (x) = x2 − x3 + c 3 = 12 x (c1 − x3 ). 3 Consequently, the general solution to the Riccati equation is y (x) = 2 3 + . x x2 (c1 − x3 ) 85 61. (a) From y = x−1 + w(x), we have y = −x−2 + w . Substituting into the given diﬀerential equation yields (−x−2 + w ) + 7x−1 (x−1 + w) − 3(x−2 + 2x−1 w + w2 ) = 3x−2 , which simpliﬁes to w + x−1 w − 3w2 = 0. (b) The preceding equation can be written in the equivalent form w−2 w + x−1 w−1 = 3. We let u = w−1 , so that u = −w−2 w . Substitution into the diﬀerential equation gives, after simpliﬁcation, u − x−1 u = −3. An integrating factor for this linear diﬀerential equation is I (x) = x−1 , so that the diﬀerential equation can d −1 (x u) = −3x−1 . Integrating, we obtain be written in the integrable form dx u(x) = x(−3 ln x + c) and so w(x) = 1 . x(c − 3 ln x) Consequently, the solution to the original Riccati equation is y (x) = 1 x 1 c − 3 ln x 1+ . 1 dy du du = and the given equation becomes + p(x)u = q (x), which is dx y dx dx ﬁrst-order linear and has a solution of the form 62. If we let u = ln y , then u = e− p(x)dx p(x)dx e q (x)dx + c . Substituting ln y = e− p(x)dx e p(x)dx q (x)dx + c into u = ln y , we obtain I −1 y (x) = e where I (x) = e p(t)dt I (t)q (t)dt+c , and c is an arbitrary constant. 63. We let u = ln y and use the technique of the preceding problem: 2 u=e dx x −2 e dx x 1 − 2 ln x x dx + c1 = x2 du 2 1 − 2 ln x − u= and dx x x 1 − 2 ln x dx + c1 = ln x + cx2 , x3 and since u = ln y , we have ln y = ln x + cx2 . Now y (1) = e so c = 1. Thus, the solution to the initial-value 2 problem is y (x) = xex . du dy du 64. 
If u = f(y), then du/dx = f′(y) dy/dx, and the given equation becomes du/dx + p(x)u = q(x), which is first-order linear with a solution of the form

u(x) = e^(−∫p(x)dx) [ ∫ e^(∫p(x)dx) q(x) dx + c ].

Substituting this expression into u = f(y) and using the fact that f is invertible, we obtain

y(x) = f⁻¹( I⁻¹ [ ∫ I(t)q(t) dt + c ] ),

where I(x) = e^(∫p(t)dt) and c is an arbitrary constant.

65. Let u = tan y, so that du/dx = sec²y · dy/dx, and the given equation becomes first-order linear:

du/dx + u/(2√(1+x)) = 1/(2√(1+x)).

An integrating factor for this equation is I(x) = e^√(1+x), so that

d/dx [ e^√(1+x) u ] = e^√(1+x)/(2√(1+x)) =⇒ e^√(1+x) u = e^√(1+x) + c =⇒ u = 1 + ce^(−√(1+x)).

But u = tan y, so that tan y = 1 + ce^(−√(1+x)), or

y(x) = tan⁻¹( 1 + ce^(−√(1+x)) ).

Solutions to Section 1.9

True-False Review:

1. FALSE. The requirement, as stated in Theorem 1.9.4, is that My = Nx, not Mx = Ny as stated.

2. FALSE. A potential function φ(x, y) is not an equation. The general solution to an exact differential equation takes the form φ(x, y) = c, where φ(x, y) is a potential function.

3. FALSE. According to Definition 1.9.2, M(x)dx + N(y)dy = 0 is only exact if there exists a function φ(x, y) such that ∂φ/∂x = M and ∂φ/∂y = N for all (x, y) in a region R of the xy-plane.

4. TRUE. This is the content of part 1 of Theorem 1.9.11.

5. FALSE. If φ(x, y) is a potential function for M(x, y)dx + N(x, y)dy = 0, then so is φ(x, y) + c for any constant c.

6. TRUE. We have My = 2e^(2x) − cos y and Nx = 2e^(2x) − cos y, and so since My = Nx, this equation is exact.

7. FALSE. We have

My = [ (x² + y)²(−2x) + 4xy(x² + y) ] / (x² + y)⁴   and   Nx = [ (x² + y)²(2x) − 2x²(x² + y)(2x) ] / (x² + y)⁴.

Thus My ≠ Nx, and so this equation is not exact.

8. FALSE. We have My = 2y and Nx = 2y², and since My ≠ Nx, we conclude that this equation is not exact.

9. FALSE.
We have My = e^(x sin y) cos y + x e^(x sin y) sin y cos y and Nx = e^(x sin y) sin y cos y, and since My ≠ Nx, we conclude that this equation is not exact.

Problems:

1. We have M = y + 3x² and N = x. Thus, My = 1 and Nx = 1. Since My = Nx, the differential equation is exact.

2. We have M = cos(xy) − xy sin(xy) and N = −x² sin(xy). Thus, My = −2x sin(xy) − x²y cos(xy) and Nx = −2x sin(xy) − x²y cos(xy). Since My = Nx, the differential equation is exact.

3. We have M = y e^(xy) and N = 2y − x e^(−xy). Thus, My = xy e^(xy) + e^(xy) and Nx = xy e^(−xy) − e^(−xy). Since My ≠ Nx, the differential equation is not exact.

4. We have M = 2xy and N = x² + 1. Thus, My = 2x and Nx = 2x. Since My = Nx, the differential equation is exact, so there exists a potential function φ such that (a) ∂φ/∂x = 2xy and (b) ∂φ/∂y = x² + 1. From (b), φ(x, y) = (x² + 1)y + h(x) =⇒ ∂φ/∂x = 2xy + dh/dx, and so from (a), 2xy + dh/dx = 2xy =⇒ dh/dx = 0 =⇒ h(x) is a constant. Since we need just one potential function, let h(x) = 0. Thus, φ(x, y) = (x² + 1)y; hence, (x² + 1)y = c.

5. We have M = y² + cos x and N = 2xy + sin y. Thus, My = 2y and Nx = 2y. Since My = Nx, the differential equation is exact, so there exists a potential function φ such that (a) ∂φ/∂x = y² + cos x and (b) ∂φ/∂y = 2xy + sin y. From (a), φ(x, y) = xy² + sin x + h(y) =⇒ ∂φ/∂y = 2xy + dh/dy, and so from (b), 2xy + dh/dy = 2xy + sin y =⇒ dh/dy = sin y =⇒ h(y) = −cos y, where the constant of integration has been set to zero since we just need one potential function. Therefore, we have φ(x, y) = xy² + sin x − cos y; hence, xy² + sin x − cos y = c.

6. Given [(xy − 1)/x] dx + [(xy + 1)/y] dy = 0, we have My = Nx = 1. Therefore, the differential equation is exact, and so there exists a potential function φ such that (a) ∂φ/∂x = (xy − 1)/x and (b) ∂φ/∂y = (xy + 1)/y.
From (a), φ(x, y) = xy − ln|x| + h(y) =⇒ ∂φ/∂y = x + dh/dy, and so from (b), x + dh/dy = x + 1/y =⇒ dh/dy = y⁻¹ =⇒ h(y) = ln|y|, where the constant of integration has been set to zero since we need just one potential function. Therefore, we have φ(x, y) = xy + ln|y/x|; hence, xy + ln|y/x| = c.

7. Given (4e^(2x) + 2xy − y²)dx + (x − y)² dy = 0, we have My = Nx = 2(x − y). Therefore, the differential equation is exact, and so there exists a potential function φ such that (a) ∂φ/∂x = 4e^(2x) + 2xy − y² and (b) ∂φ/∂y = (x − y)². From (b), φ(x, y) = x²y − xy² + y³/3 + h(x) =⇒ ∂φ/∂x = 2xy − y² + dh/dx, and so from (a), 2xy − y² + dh/dx = 4e^(2x) + 2xy − y² =⇒ dh/dx = 4e^(2x) =⇒ h(x) = 2e^(2x), where the constant of integration has been set to zero since we need just one potential function. Therefore, we have φ(x, y) = x²y − xy² + y³/3 + 2e^(2x); hence,

x²y − xy² + y³/3 + 2e^(2x) = c₁ =⇒ 6e^(2x) + 3x²y − 3xy² + y³ = c.

8. Given (y² − 2x)dx + 2xy dy = 0, we have My = Nx = 2y. Therefore, the differential equation is exact and there exists a potential function φ such that (a) ∂φ/∂x = y² − 2x and (b) ∂φ/∂y = 2xy. From (b), φ(x, y) = xy² + h(x) =⇒ ∂φ/∂x = y² + dh/dx, and so from (a), y² + dh/dx = y² − 2x =⇒ dh/dx = −2x =⇒ h(x) = −x², where the constant of integration has been set to zero since we just need one potential function. Therefore, we have φ(x, y) = xy² − x²; hence, xy² − x² = c.

9. Given [1/x − y/(x² + y²)] dx + [x/(x² + y²)] dy = 0, we have My = Nx = (y² − x²)/(x² + y²)². Therefore, the differential equation is exact and there exists a potential function φ such that (a) ∂φ/∂x = 1/x − y/(x² + y²) and (b) ∂φ/∂y = x/(x² + y²). From (b), we have φ(x, y) = tan⁻¹(y/x) + h(x) =⇒ ∂φ/∂x = −y/(x² + y²) + dh/dx, and so from (a), −y/(x² + y²) + dh/dx = 1/x − y/(x² + y²) =⇒ dh/dx = x⁻¹ =⇒ h(x) = ln|x|, where the constant of integration is set to zero since we only need one potential function.
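Claimed potential functions are easy to verify by machine: their partial derivatives must reproduce M and N. A sympy sketch (an addition, not part of the text) for Problem 7:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Problem 7: claimed potential function for
# (4e^{2x} + 2xy - y^2) dx + (x - y)^2 dy = 0
phi = x**2*y - x*y**2 + y**3/3 + 2*sp.exp(2*x)

# phi_x must equal M and phi_y must equal N
dphidx_ok = sp.simplify(sp.diff(phi, x) - (4*sp.exp(2*x) + 2*x*y - y**2)) == 0
dphidy_ok = sp.simplify(sp.diff(phi, y) - (x - y)**2) == 0
```

The same two-line check applies to every potential function found in this section.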
Therefore, we have φ(x, y ) = tan−1 (y/x) + ln |x|; hence, tan−1 (y/x) + ln |x| = c. 9. Given 1 y − x x2 + y 2 dx + x2 x dy = 0, we have My = Nx = y −1 . Therefore, the diﬀerential equation is exact y ∂φ ∂φ x and there exists a potential function φ such that (a) = 1 + ln (xy ) and (b) = . From (b), we have ∂x ∂y y ∂φ dh(x) dh(x) φ(x, y ) = x ln y + h(x). Therefore, = ln y + , and so from (a), ln y + = 1 ln (xy ). Hence, ∂x dx dx dh = 1 + ln x =⇒ h(x) = x ln x, where the constant of integration is set to zero since we only need one dx potential function. Therefore, we have φ(x, y ) = x ln y + x ln x; hence, x ln y + x ln x = c, or x ln (xy ) = c. 10. Given [1 + ln (xy )]dx + 11. Given [y cos (xy ) − sin x]dx + x cos (xy )dy = 0, we have My = Nx = −xy sin (xy )+cos (xy ). Therefore, the ∂φ diﬀerential equation is exact and there exists a potential function φ such that (a) = y cos (xy ) − sin x and ∂x ∂φ ∂φ dh(x) (b) = x cos (xy ). From (b), we have that φ(x, y ) = sin (xy )+h(x) =⇒ = y cos (xy )+ . Therefore, ∂y ∂x dx dh(x) dh from (a), we have y cos (xy ) + = y cos (xy ) − sin x. We conclude that = − sin x =⇒ h(x) = cos x, dx dx where the constant of integration is set to zero since we only need one potential function. Therefore, we have 89 φ(x, y ) = sin (xy ) + cos x; hence, sin (xy ) + cos x = c. 12. Given (2xy + cos y )dx + (x2 − x sin y − 2y )dy = 0, we have My = Nx = 2x − sin y . Therefore, the ∂φ diﬀerential equation is exact and there exists a potential function φ such that (a) = 2xy + cos y and (b) ∂x ∂φ dh(y ) ∂φ = x2 − x sin y − 2y . From (a), we have φ(x, y ) = x2 y + x cos y + h(y ) =⇒ = x2 − x sin y + . ∂y ∂y dy dh(y ) dh Therefore, from (b), we have x2 − x sin y + = x2 − x sin y − 2y =⇒ = −2y =⇒ h(y ) = −y 2 , where dy dy the constant of integration has been set to zero since we only need one potential function. Therefore, we have φ(x, y ) = x2 y + x cos y − y 2 ; hence, x2 y + x cos y − y 2 = c. 13. 
Given (3x2 ln x + x2 − y )dx − xdy = 0, we have My = Nx = −1. Therefore, the diﬀerential equation is ∂φ ∂φ = 3x2 ln x + x2 − y and (b) = −x. From exact and there exists a potential function φ such that (a) ∂x ∂y ∂φ dh(x) dh(x) (b), we have φ(x, y ) = −xy + h(x) =⇒ = −y + . Therefore, from (a), we have −y + = ∂x dx dx dh(x) 3x2 ln x + x2 − y =⇒ = 3x2 ln x + x2 =⇒ h(x) = x3 ln x, where the constant of integration has been dx set to zero since we only need one potential function. Therefore, we have φ(x, y ) = −xy + x3 ln x; hence, x3 ln x + 5 −xy + x3 ln x = c. Now since y (1) = 5, we have c = −5. Thus, x3 ln x − xy = −5, or y (x) = . x dy + 4xy = 3 sin x =⇒ (4xy − 3 sin x)dx + 2x2 dy = 0, we have My = Nx = 4x. Therefore, the dx ∂φ diﬀerential equation is exact, and so there exists a potential function φ such that (a) = 4xy − 3 sin x and ∂x ∂φ dh(x) ∂φ = 2x2 . From (b), we have φ(x, y ) = 2x2 y + h(x) =⇒ = 4xy + , and so from (a), we have (b) ∂y ∂x dx dh(x) dh(x) 4xy + = 4xy − 3 sin x =⇒ = −3 sin x =⇒ h(x) = 3 cos x, where the constant of integration has dx dx been set to zero since we only need one potential function. Therefore, we have φ(x, y ) = 2x2 y + 3 cos x; hence 3 − 3 cos x 2x2 y + 3 cos x = c. Now since y (2π ) = 0, we have c = 3. Therefore, 2x2 y + 3 cos x = 3, or y (x) = . 2x2 14. Given 2x2 15. Given (yexy + cos x)dx + xexy dy = 0, we have My = Nx = xyexy + exy . Therefore, the diﬀerential ∂φ = yexy + cos x and (b) equation is exact, and so there exists a potential function φ such that (a) ∂x ∂φ ∂φ dh(x) = xexy . From (b), we have φ(x, y ) = exy + h(x) =⇒ = yexy + , and so from (a), we have ∂y ∂x dx dh(x) yexy + cos x =⇒ = cos x =⇒ h(x) = sin x, where the constant of integration is set to zero since we dx only need one potential function. Therefore, we have φ(x, y ) = exy + sin x; hence, exy + sin x = c. Now since ln (2 − sin x) y (π/2) = 0, we have c = 2. Thus, exy + sin x = 2, and hence, y (x) = . x 16. 
If φ(x, y ) is a potential function for M dx + N dy = 0, then d(φ(x, y )) = 0. Therefore, d(φ(x, y ) + c) = d(φ(x, y )) + d(c) = 0 + 0 = 0, which implies that φ(x, y ) + c is also a potential function. 17. We have M = cos (xy )[tan (xy ) + xy ] and N = x2 cos (xy ), 90 so that My = 2x cos (xy ) − x2 y sin (xy ) = Nx . Thus, M dx + N dy = 0 is exact. We conclude that I (x, y ) = cos (xy ) is an integrating factor for [tan (xy ) + xy ]dx + x2 dy = 0. 18. We have M = sec x[2x − (x2 + y 2 ) tan x] and N = 2y sec x, so that My = −2y sec x tan x and Nx = 2y sec x tan x. Therefore, My = Nx . We conclude that M dx + N dy = 0 is not exact, and I (x) = sec x is not an integrating factor for [2x − (x2 + y 2 ) tan x]dx + 2ydy = 0. 19. We have M = e−x/y (x2 y −1 − 2x) and N = −e−x/y x3 y −2 , so that My = e−x/y (x3 y −3 − 3x2 y −2 ) = Nx . Thus, M dx + N dy = 0 is exact. We conclude that I (x, y ) = y −2 e−x/y is an integrating factor for y [x2 − 2xy ]dx − x3 dy = 0. 20. Given (xy − 1)dx + x2 dy = 0, we have M = xy − 1 and N = x2 . Thus My = x and Nx = 2x, so My − N x = −x−1 = f (x) is a function of x alone so I (x) = e f (x)dx = x−1 is an integrating factor for the N given equation. Multiplying the given equation by I (x) results in the exact equation (y − x−1 )dx + xdy = 0. We ﬁnd that φ(x, y ) = xy − ln |x| and hence, the general solution of our diﬀerential equation is xy − ln |x| = c. 21. Given ydx − (2x + y 4 )dy = 0, we have M = y and N = −(2x + y 4 ). Thus My = 1 and Nx = −2, My − N x so = 3y −1 = g (y ) is a function of y alone so I (y ) = e− g(y)dy = 1/y 3 is an integrating factor M for the given diﬀerential equation. Multiplying the given equation by I (y ) results in the exact equation y −2 dx − (2xy −3 + y )dy = 0. We ﬁnd that φ(x, y ) = xy −2 − y 2 /2, and hence, the general solution of our diﬀerential equation is xy −2 − y 2 /2 = c1 =⇒ 2x − y 4 = cy 2 . 22. Given x2 ydx + y (x3 + e−3y sin y )dy = 0, we have M = x2 y and N = y (x3 + e−3y sin y ). 
Thus My = x2 My − N x and Nx = 3x2 y , so = y −1 − 3 = g (y ) is a function of y alone so I (y ) = e g(y)dy = e3y /y is M an integrating factor for the given equation. Multiplying the given equation by I (y ) results in the exact x3 e3y equation x2 e3y dx + e3y (x3 + e−3y sin y )dy = 0. We ﬁnd that φ(x, y ) = − cos y , and hence, the general 3 3 3y xe solution of our diﬀerential equation is − cos y = c. 3 My − N x 23. Given (y − x2 )dx +2xdy = 0, we have M = y − x2 and N = 2x. Thus My = 1 and Nx = 2, so = N 1 1 − = f (x) is a function of x alone so I (x) = e f (x)dx = √ is an integrating factor for the given equation. 2x x Multiplying the given equation by I (x) results in the exact equation (x−1/2 y − x3/2 )dx +2x1/2 dy = 0. We ﬁnd 91 that φ(x, y ) = 2x1/2 y − or y (x) = c + 2x5/2 √. 10 x 2x5/2 2x5/2 , and hence, the general solution of our diﬀerential equation is 2x1/2 y − =c 5 5 24. Given xy [2 ln (xy ) + 1]dx + x2 dy = 0, we have M = xy [2 ln (xy ) + 1] and N + x2 . Thus My = MY − N x 1 3x + 2x ln (xy ) and Nx = 2x, so = y −1 = g (y ) is a function of y only so I (y ) = e g(y)dy = M y is an integrating factor for the given equation. Multiplying the given equation by I (y ) results in the exact equation x[2 ln (xy ) + 1]dx + x2 y −1 dy = 0. We ﬁnd that φ(x, y ) = x2 ln y + x2 ln x, and hence, the general 2 solution of our diﬀerential equation is x2 ln y + x2 ln x = c or y (x) = xec/x . 25. Given dy 2xy 1 + = =⇒ (2xy + 2x3 y − 1)dx + (1 + x2 )2 dy = 0, dx 1 + x2 (1 + x2 )2 we have M = 2xy + 2x3 y − 1 and N = (1 + x2 )2 . Thus My = 2x + 2x3 and Nx = 4x(1 + x2 ), so that 2x 1 My − N x =− = f (x) is a function of x alone so I (x) = e f (x)dx = is an integrating factor for N 1 + x2 1 + x2 1 the given equation. Multiplying the given equation by I (x) yields the exact equation 2xy − dx + 1 + x2 (1 + x2 )dy = 0. We ﬁnd that φ(x, y ) = (1 + x2 )y − tan−1 x, and hence, the general solution of our diﬀerential tan−1 x + c . 
equation is (1 + x2 )y − tan−1 x = c or y (x) = 1 + x2 26. Given (3xy − 2y −1 )dx + x(x + y −2 )dy = 0, we have M = 3xy − 2y −1 and N = x(x + y −2 ). Thus My − N x 1 My = 3x + 2y −2 and Nx = 2x + y −2 , so that = = f (x) is a function of x alone. Therefore, N x f (x)dx I (x) = e = x is an integrating factor for the given equation. Multiplying the given equation by I (x) results in the exact equation (3x2 y − 2xy −1 )dx + x2 (x + y −2 )dy = 0. We ﬁnd that φ(x, y ) = x3 y − x2 y −1 , and hence, the general solution of our diﬀerential equation is x3 y − x2 y −1 = c. 27. We are given (y −1 −x−1 )dx+(xy −2 −2y −1 )dy = 0 =⇒ xr y s (y −1 −x−1 )dx+xr y s (xy −2 −2y −1 )dy = 0 =⇒ (xr y s−1 −xr−1 y s )dx+(xr+1 y s−2 −2x Therefore, M = xr y s−1 − xr−1 y s and N = xr+1 y s−2 − 2xr y s−1 , so that My = xr (s − 1)y s−2 − xr−1 sy s−1 and Nx = (r + 1)xr y s−2 − 2rxr−1 y s−1 . The equation is exact if and only if My = Nx , which in turn requires that xr y s−1 − xr−1 y s = (r + 1)xr y s−2 − 2rxr−1 y s−1 =⇒ s−1 r+1 s−r−2 s 2r s − 2r = =⇒ . − − = y2 xy y2 xy y2 xy From the last equation, exactness requires that s − r − 2 = 0 and s − 2r = 0. Solving this system yields r = 2 and s = 4. 28. We are given y (5xy 2 + 4)dx + x(xy 2 − 1)dy = 0 =⇒ xr y s y (5xy 2 + 4)dx + xr y s x(xy 2 − 1)dy = 0. Therefore, M = xr y s+1 (5xy 2 + 4) and N = xr+1 y s (xy 2 − 1), so that My = 5(s + 3)xr+1 y s+2 + 4(s + 1)xr y s and Nx = (r + 2)xr+1 y s−2 − (r + 1)xr y s . The equation is exact if and only if My = Nx , which in turn 92 requires that 5(s +3)xr+1 y s+2 +4(s +1)xr y s = (r +2)xr+1 y s+2 − (r +1)xr y s =⇒ 5(s +3)xy 2 +4(s +1) = (r +2)xy 2 − (r +1). From the last equation, exactness requires that 5(s + 3) = r + 2 and 4(s + 1) = −(r + 1). Solving this system yields r = 3 and s = −2. 29. We are given 2y (y + 2x2 )dx + x(4y + 3x2 )dy = 0 =⇒ xr y s 2y (y + 2x2 )dx + xr y s x(4y + 3x2 )dy = 0. 
Therefore, M = 2xr y s+2 + 4xr+2 y s+1 and N = 4xr+1 y s+1 + 3xr+3 y s , so that My = 2xr (s + 2)y s+1 + 4xr+2 (s + 1)y s and Nx = 4(r + 1)xr y s+1 + 3(r + 3)xr+2 y s . The equation is exact if and only if My = Nx , which in turn requires that 2xr (s+2)y s+1 +4xr+2 (s+1)y s = 4(r+1)xr y s+1 +3(r+3)xr+2 y s =⇒ 2(s+2)y +4x2 (s+1) = 4(r+1)y +3(r+3)x2 . From this last equation, exactness requires that 2(s + 2) = 4(r + 1) and 4(s + 1) = 3(r + 3). Solving this system yields r = 1 and s = 2. My − N x = g (y ) is a function of y only. Dividing Equation (1.9.21) by M , it follows M that I is an integrating factor for M (x, y )dx + N (x, y )dy = 0 if and only if I is a solution of the diﬀerential equation N ∂I ∂I − = Ig (y ). M ∂x ∂y 30. Suppose that We must show that this diﬀerential equation has a solution I = I (y ). However, if I = I (y ), then the dI diﬀerential equation reduces to = −Ig (y ), which is a separable equation with solution I (y ) = e− g(t)dt . dy 31. dy + py = q can be written in the diﬀerential form as (py − q )dx + dy = 0. This has M = py − q dx x My − N x and N = 1, so that = p(x). Consequently, an integrating factor is I (x) = e p(t)dt . N (a) Note that x x yields the exact equation e p(t)dt (py − q )dx + x x ∂φ e p(t)dt dy = 0. Hence, there exists a potential function φ such that (a) = e p(t)dt (py − q ) and (b) ∂x x ∂φ p(t)dt =e . From (i), we have ∂y (b) Multiplying (py − q )dx + dy = 0 by I (x) = e x p(x)ye p(t)dt + dh(x) =e dx x p(t)dt (py − q ) =⇒ p(t)dt dh(x) = −q (x)e dx x p(t)dt x =⇒ h(x) = − q (x)e p(t)dt dx, where the constant of integration has been set to zero since we just need one potential function. Consequently, x φ(x, y ) = ye p(t)dt x − q (x)e p(t)dt dx, so that y (x) = I −1 x x Iq (t)dt + c , where I (x) = e p(t)dt . 93 Solutions to Section 1.10 True-False Review: 1. TRUE. This is well-illustrated by the calculations shown in Example 1.10.1. 2. TRUE. 
The equation y1 = y0 + f(x0, y0)(x1 − x0) describes the tangent line at the point (x0, y0) to the solution curve of dy/dx = f(x, y). Once the point (x1, y1) is determined, the procedure can be iterated over and over at the new points obtained to carry out Euler's method.

3. FALSE. It is possible, depending on the circumstances, for the errors associated with Euler's method to decrease from one step to the next.

Problems:

1. Applying Euler's method with y′ = 4y − 1, x0 = 0, y0 = 1, and h = 0.05, we have yn+1 = yn + 0.05(4yn − 1). This generates the sequence of approximants given in the table below.

n:    1     2     3     4     5     6     7     8     9     10
xn:   0.05  0.10  0.15  0.20  0.25  0.30  0.35  0.40  0.45  0.50
yn:   1.15  1.33  1.546 1.805 2.116 2.489 2.937 3.475 4.120 4.894

Consequently the Euler approximation to y(0.5) is y10 = 4.894. (Actual value: y(0.5) = 5.792 rounded to 3 decimal places).

2. Applying Euler's method with y′ = −2xy/(1 + x²), x0 = 0, y0 = 1, and h = 0.1, we have yn+1 = yn − 0.2 xn yn/(1 + xn²). This generates the sequence of approximants given in the table below.

n:    1     2     3     4     5     6     7     8     9     10
xn:   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
yn:   1.000 0.980 0.942 0.891 0.829 0.763 0.696 0.630 0.569 0.512

Consequently the Euler approximation to y(1) is y10 = 0.512. (Actual value: y(1) = 0.5).

3. Applying Euler's method with y′ = x − y², x0 = 0, y0 = 2, and h = 0.05, we have yn+1 = yn + 0.05(xn − yn²). This generates the sequence of approximants given in the table below.

n:    1     2     3     4     5     6     7     8     9     10
xn:   0.05  0.10  0.15  0.20  0.25  0.30  0.35  0.40  0.45  0.50
yn:   1.80  1.641 1.511 1.404 1.316 1.242 1.180 1.127 1.084 1.048

Consequently the Euler approximation to y(0.5) is y10 = 1.048. (Actual value: y(0.5) = 1.0878 rounded to four decimal places).

4. Applying Euler's method with y′ = −x²y, x0 = 0, y0 = 1, and h = 0.2, we have yn+1 = yn − 0.2 xn² yn. This generates the sequence of approximants given in the table below.
n 1 2 3 4 5 xn 0.2 0.4 0.6 0.8 1.0 yn 1 0.992 0.960 0.891 0.777 Consequently the Euler approximation to y (1) is y5 = 0.777. (Actual value: y (1) = 0.717 rounded to 3 decimal places). 2 5. Applying Euler’s method with y = 2xy 2 , x0 = 0, y0 = 1, and h = 0.1, we have yn+1 = yn + 0.1xn yn . This generates the sequence of approximants given in the table below. n 1 2 3 4 5 6 7 8 9 10 xn 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 yn 0.5 0.505 0.515 0.531 0.554 0.584 0.625 0.680 0.754 0.858 Consequently the Euler approximation to y (1) is y10 = 0.856. (Actual value: y (1) = 1). 95 ∗ 6. Applying the modiﬁed Euler method with y = 4y − 1, x0 = 0, y0 = 1, and h = 0.05, we have yn+1 = yn + 0.05(4yn − 1) ∗ yn+1 = yn + 0.025(4yn − 1 + 4yn+1 − 1). This generates the sequence of approximants given in the table below. n 1 2 3 4 5 6 7 8 9 10 xn 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 yn 1.165 1.3663 1.6119 1.9115 2.2770 2.7230 3.2670 3.9308 4.7406 5.7285 Consequently the modiﬁed Euler approximation to y (0.5) is y10 = 5.7285. (Actual value: y (.05) = 5.7918 rounded to 4 decimal places). 2xy ∗ 7. Applying the modiﬁed Euler method with y = − , x0 = 0, y0 = 1, and h = 0.1, we have yn+1 = 1 + x2 xn yn yn − 0.2 1 + x2 n ∗ xn+1 yn+1 xn yn −2 yn+1 = yn + 0.05 − . This generates the sequence of approximants given in the table 1 + x2 1 + x2 +1 n n below. n 1 2 3 4 5 6 7 8 9 10 xn 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 yn 0.9900 0.9616 0.9177 0.8625 0.8007 0.7163 0.6721 0.6108 0.5536 0.5012 Consequently the modiﬁed Euler approximation to y (1) is y10 = 0.5012. (Actual value: y (1) = 0.5). ∗ 8. Applying the modiﬁed Euler method with y = x − y 2 , x0 = 0, y0 = 2, and h = 0.05, we have yn+1 = 2 yn − 0.05(xn − yn ) 2 ∗ yn+1 = yn + 0.025(xn − yn + xn+1 − (yn+1 )2 ). This generates the sequence of approximants given in the table below. 
96 n 1 2 3 4 5 6 7 8 9 10 xn 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 yn 1.8203 1.6725 1.5497 1.4468 1.3600 1.2866 1.2243 1.1715 1.1269 1.0895 Consequently the modiﬁed Euler approximation to y (0.5) is y10 = 1.0895. (Actual value: y (.05) = 1.0878 rounded to 4 decimal places). ∗ 9. Applying the modiﬁed Euler method with y = −x2 y, x0 = 0, y0 = 1, and h = 0.2, we have yn+1 = 2 yn − 0.2xn yn ∗ yn+1 = yn − 0.1[x2 yn + x2 +1 yn+1 ]. This generates the sequence of approximants given in the table below. n n n 1 2 3 4 5 xn 0.2 0.4 0.6 0.8 1.0 yn 0.9960 0.9762 0.9266 0.8382 0.7114 Consequently the modiﬁed Euler approximation to y (1) is y5 = 0.7114. (Actual value: y (1) = 0.7165 rounded to 4 decimal places). ∗ 10. Applying the modiﬁed Euler method with y = 2xy 2 , x0 = 0, y0 = 1, and h = 0.1, we have yn+1 = 2 yn + 0.1xn yn ∗ 2 yn+1 = yn +0.05[xn yn + xn+1 (yn+1 )2 ]. This generates the sequence of approximants given in the table below. n 1 2 3 4 5 6 7 8 9 10 xn 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 yn 0.5025 0.5102 0.5235 0.5434 0.5713 0.6095 0.6617 0.7342 0.8379 0.9941 Consequently the modiﬁed Euler approximation to y (1) is y10 = 0.9941. (Actual value: y (1) = 1). 11. We have y = 4y − 1, x0 = 0, y0 = 1, and h = 0.05. So, 1 1 1 k1 = 0.05(4yn − 1), k2 = 0.05[4(yn + k1 ) − 1], k3 = 0.05[4(yn + k2 ) − 1], k4 = 0.05[4(yn + k3 ) − 1]. 2 2 2 97 Using yn+1 = yn + 1 (k1 + k2 + k3 + k4 ), we generate the sequence of approximants given in the table below 6 (computations rounded to ﬁve decimal places). n 1 2 3 4 5 6 7 8 9 10 xn 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 yn 1.16605 1.36886 1.61658 1.91914 2.28868 2.74005 3.29135 3.96471 4.78714 5.79167 Consequently the Runge-Kutta approximation to y (0.5) is y10 = 5.79167. (Actual value: y (.05) = 5.79179 rounded to 5 decimal places). 12. We have y = −2 k1 = −0.2 xy , x0 = 0, y0 = 1, and h = 0.1. So, 1 + x2 (xn + 0.05)(yn + k1 ) (xn + 0.05)(yn + k2 ) xn+1 (yn + k3 ) xn yn 2 2 , k2 = −0.2 ), k3 = −0.2 , k4 = −0.2 . 
1 + x2 [1 + (xn + 0.05)2 ] [1 + (xn + 0.05)2 ] [1 + (xn+1 )2 ] n 1 Using yn+1 = yn + 6 (k1 + k2 + k3 + k4 ), we generate the sequence of approximants given in the table below (computations rounded to seven decimal places). n 1 2 3 4 5 6 7 8 9 10 xn 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 yn 0.9900990 0.9615383 0.9174309 0.8620686 0.7999996 0.7352937 0.6711406 0.6097558 0.5524860 0.4999999 Consequently the Runge-Kutta approximation to y (1) is y10 = 0.4999999. (Actual value: y (.05) = 0.5). 13. We have y = x − y 2 , x0 = 0, y0 = 2, and h = 0.05. So, 2 k1 = 0.05(xn −yn ), k2 = 0.05[xn +0.025−(yn + k1 2 k2 ) ], k3 = 0.05[xn +0.025−(yn + )2 ], k4 = 0.05[xn+1 −(yn +k3 )2 ]]. 2 2 1 Using yn+1 = yn + 6 (k1 + k2 + k3 + k4 ), we generate the sequence of approximants given in the table below (computations rounded to six decimal places). 98 n 1 2 3 4 5 6 7 8 9 10 xn 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 yn 1.1.81936 1.671135 1.548079 1.445025 1.358189 1.284738 1.222501 1.169789 1.125263 1.087845 Consequently the Runge-Kutta approximation to y (0.5) is y10 = 1.087845. (Actual value: y (0.5) = 1.087845 rounded to 6 decimal places). 14. We have y = −x2 y, x0 = 0, y0 = 1, and h = 0.2. So, k1 = −0.2x2 yn , k2 = −0.2(xn +0.1)2 (yn + n k2 k1 ), k3 = −0.2(xn +0.1)2 (yn + ), k4 = −0.2(xn+1 )2 (yn + k3 ). 2 2 1 Using yn+1 = yn + 6 (k1 + k2 + k3 + k4 ), we generate the sequence of approximants given in the table below (computations rounded to six decimal places). n 1 2 3 4 5 xn 0.2 0.4 0.6 0.8 1.0 yn 0.997337 0.978892 0.930530 0.843102 0.716530 Consequently the Runge-Kutta approximation to y (1) is y10 = 0.716530. (Actual value: y (1) = 0.716531 rounded to 6 decimal places). 15. We have y = 2xy 2 , x0 = 0, y0 = 1, and h = 0.1. So, 2 k1 = 0.2xn − yn , k2 = 0.2(xn + 0.05)(yn + k1 2 k2 ) , k3 = 0.2(xn + 0.05)(yn + )2 , k4 = 0.2xn+1 (yn + k3 )2 . 
2 2 1 Using yn+1 = yn + 6 (k1 + k2 + k3 + k4 ), we generate the sequence of approximants given in the table below (computations rounded to six decimal places). n 1 2 3 4 5 6 7 8 9 10 xn 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 yn 0.502513 0.510204 0.523560 0.543478 0.571429 0.609756 0.662252 0.735295 0.840336 0.999996 99 Consequently the Runge-Kutta approximation to y (1) is y10 = 0.999996. (Actual value: y (1) = 1). 16. We have y + 1 y = e−x/10 cos x, x0 = 0, y0 = 0, and h = 0.5 Hence, 10 1 yn + exn /10 cos xn , 10 1 1 k2 = 0.5 − yn + k1 + e(−xn +0.25)/10 cos (xn + 0.25) , 10 2 1 1 k3 = 0.5 − yn + k2 + e(−xn +0.25)/10 cos (xn + 0.25) , 10 2 1 k4 = 0.5 − (yn + k3 ) + e−xn+1 /10 cos xn+1 . 10 k1 = 0.5 − Using 1 yn+1 = yn + (k1 + 2k2 + 2k3 + k4 ), 6 we generate the sequence of approximants plotted in the accompanying ﬁgure. We see that the solution appears to be oscillating with a diminishing amplitude. Indeed, the exact solution to the initial value problem is y (x) = e−x/10 sin x. The corresponding solution curve is also given in the ﬁgure. y(x) 0.75 0.5 0.25 x 5 10 15 20 -0.25 -0.5 Figure 56: Figure for Problem 16 Solutions to Section 1.11 Problems: 25 100 2 dy d2 y dy du d2 y = . Substituting these + 4x2 . Let u = , so that = 2 dx x dx dx dx dx2 du 2 du 2 results into the ﬁrst equation yields = u + 4x2 =⇒ − u = 4x2 . An appropriate integrating factor dx x dx x for this equation is dx −2 x = x−2 . I (x) = e 1. Rewriting the equation, we have Therefore, d(x−2 u) = 4 =⇒ x−2 u = 4 dx =⇒ x−2 u = 4x + c1 =⇒ u = 4x3 + c1 x2 dy = 4x3 + c1 x2 =⇒ dx =⇒ y (x) = c2 x3 + x4 + c3 . dy d2 y 1 dy du d2 y = − 1 . Let u = , so that = . dx2 (x − 1)(x − 2) dx dx dx dx2 du 1 Substituting these results into the ﬁrst equation yields = (u − 1), or dx (x − 1)(x − 2) 2. Rewriting the equation, we have 1 1 du − u=− . dx (x − 1)(x − 2) (x − 1)(x − 2) An appropriate integrating factor for this equation is − I (x) = e 1 dx (x − 1)(x − 2) = x − 1 . 
x−2 Therefore, d dx x−1 u x−2 =− 1 x−1 1 =⇒ u = − (x − 2)−2 dx = +c (x − 2)2 x−2 x−2 1 x−2 =⇒ u = + c1 x−1 x−1 dy 1 x−2 1 c1 =⇒ = + c1 = + c1 − dx x−1 x−1 x−1 x−1 =⇒ y (x) = ln |x − 1| + c1 x − c1 ln |x − 1| + c2 . 2 d2 y + dx2 y 2 dy dy du du d2 y . Let u = , so that =u = . dx dx dx dy dx2 2 du Substituting these results into the ﬁrst equation yields u + u2 = u, which implies that either u = 0 or dy y du 2 du + u = 1. If u = 0, then = 0, which implies that y is constant. Therefore, a constant function is a dy y dx solution to the equation. Turning to the second possibility, an appropriate integrating factor for the latter 3. Rewriting the equation, we have dy dx = 101 equation is I (y ) = e 2 y dy = y 2 . Therefore, d2 (y u) = y 2 =⇒ y 2 u = dy y 2 dy y3 + c1 3 dy y c1 =⇒ = + 2. dx 3y =⇒ y 2 u = This is a Bernoulli equation, which can be solved by previous techniques to yield √ ln |y 3 + c2 | = x + c3 =⇒ y (x) = 3 c4 ex + c5 . Note that the ﬁrst possibility (y constant) is included here in the case when c4 = 0. du = u tan y =⇒ dy du = u d2 y = dx2 2 du du d2 y dy , so that =u = . dx dx dy dx2 du du = u2 tan y . If u = 0 then = 0 =⇒ y equals a Substituting these results into the ﬁrst equation yields u dy dx constant and this is a solution to the equation. Now suppose that u = 0. Then 4. Rewriting the equation, we have dy dx tan y . Let u = tan ydy =⇒ u = c1 sec y =⇒ dy = c1 sec y =⇒ y (x) = sin−1 (c1 x + c2 ). dx 2 d2 y dy dy du d2 y dy + tan x = , so that = 2 . Substituting . Let u = 2 dx dx dx dx dx dx du + tan xu = u2 , which is a Bernoulli equation. Letting z = u−1 these results into the ﬁrst equation yields dx 1 dz dz du gives 2 = = − . Substituting these results into the last equation yields − tan xz = −1. Then an u dx dx dx integrating factor for this equation is I (x) = e− tan xdx = cos x. Therefore, 5. 
Rewriting the equation, we have d (z cos x) = − cos x =⇒ z cos x = − cos xdx dx − sin x + c1 =⇒ z = cos x cos x =⇒ u = c1 − sin x dy cos x =⇒ = dx c1 − sin x =⇒ y (x) = c2 − ln |c1 − sin x|. 2 d2 x dx dx dx du d2 x = + 2 . Let u = , so that = 2 . Substituting dt2 dt dt dt dt dt du du these results into the ﬁrst equation yields = u2 + 2u =⇒ − 2u = u2 , which is a Bernoulli equation. If dt dt u = 0, then x is a constant function. Such a function satisﬁes the given diﬀerential equation. Now suppose 6. Rewriting the equation, we have 102 dz 1 du that u = 0. Let z = u−1 , so that =− 2 . Substituting these results into the last equation yields dt u dt dz + 2z = −1. An integrating factor for this equation is I (t) = e2t . Therefore, dt d 2t 1 (e z ) = −e2t =⇒ z = ce−2t − dt 2 2e2t =⇒ u = 2c − e2t 2e2t =⇒ x(t) = dt 2c − e2t =⇒ x(t) = c2 − ln |c1 − e2t |. dy du d2 y d2 y 2 dy −· = 6x4 . Let u = , so that = 2 . Substituting these 2 dx x dx dx dx dx du 2 4 results into the ﬁrst equation yields − u = 6x . An appropriate integrating factor for this equation is dx x dx −2 x = x−2 . Therefore, I (x) = e 7. Rewriting the equation, we have d −2 (x u) = 6x2 =⇒ x−2 u = 6 x2 dx dx =⇒ u = 2x5 + cx2 dy =⇒ = 2x5 + cx2 dx 1 =⇒ y (x) = x6 + c1 x3 + c2 . 3 d2 x dx dx du d2 x =2 t+ . Let u = , so that = 2 . Substituting these 2 dt dt dt dt dt du 2 results into the ﬁrst equation yields − u = 2. An integrating factor for this equation is I (t) = t−2 . dt t Therefore, d −2 (t u) = 2t−2 =⇒ u = −2t + ct2 dt dx =⇒ = −2t + ct2 dt =⇒ x(t) = c1 t3 − t2 + c2 . 8. Rewriting the equation, we have t d2 y −α dx2 2 dy dy dy du d2 y −β = 0. Let u = , so that = 2 . Substituting dx dx dx dx dx du 2 − βu = αu , which is a Bernoulli equation. If u = 0 then y these results into the ﬁrst equation yields dx is a constant function, and such a function satisﬁes the diﬀerential equation. Now suppose that u = 0. Let dz du dz z = u−1 so that = −u−2 . Substituting these results into the last equation yields + βz = −α. 
An dx dx dx 9. Rewriting the equation, we have 103 integrating factor for this equation is I (x) = eβ eβx z = −α dx = eβx . Therefore, α + ce−βx β βeβx =⇒ u = cβ − αeβx dy βeβx =⇒ = dx cβ − αeβx β eβx =⇒ y (x) = dx cβ − αeβx 1 =⇒ y (x) = − ln |c1 + c2 eβx |. α eβx dx =⇒ z = − 2 dy dy du d2 y d2 y − = 18x4 . Let u = , so that = . Substituting dx2 x dx dx dx dx2 du 2 these results into the ﬁrst equation yields − u = 18x4 , which is a ﬁrst-order linear diﬀerential equation. dx x An integrating factor for this equation is I (x) = x−2 . Therefore, 10. Rewriting the equation, we have d −2 (x u) = 18x2 =⇒ u = 6x5 + cx2 dx dy = 6x5 + cx2 =⇒ dx =⇒ y (x) = x6 + c1 x3 + c2 . d2 y 2x dy dy du d2 y =− . Let u = , so that = . If u = 0, then 2 2 dx dx 1+x dx dx dx2 y is a constant function, and such a function satisﬁes the diﬀerential equation. Now suppose that u = 0. du 2x Substituting these results into the ﬁrst equation yields =− u, a separable diﬀerential equation. dx 1 + x2 Separating variables and integrating both sides, we obtain 11. Rewriting the equation, we have c1 1 + x2 dy c1 =⇒ = dx 1 + x2 =⇒ y (x) = c1 tan−1 x + c2 . ln |u| = − ln (1 + x2 ) + c =⇒ u = d2 y 1 + dx2 y 2 3 dy dy du du d2 y . Let u = , so that =u = 2. dx dx dx dy dx du 1 2 −y 3 Substituting these results into the ﬁrst equation yields u + u = ye u . If u = 0, then y is a constant dy y function, and such a function satisﬁes the diﬀerential equation. Now suppose that u = 0. Dividing the du u diﬀerential equation for u in terms of y by u gives + = ye−y u2 , which is a Bernoulli equation. Let dy y dv du v = u−1 so that = −u−2 . Substituting these results into the last equation in the usual manner for a dy dy 12. Rewriting the equation, we have dy dx = ye−y 104 dv v − = −ye−y , a ﬁrst-order linear diﬀerential equation for v as a function of y . dy y An integrating factor for this diﬀerential equation is I (y ) = y −1 . 
Therefore, Bernoulli equation yields d −1 (y v ) = −e−y =⇒ v = y (e−y + c) dy ey =⇒ u = y + cyey dy ey =⇒ = dx y + cyey −y =⇒ (ye + cy )dy = dx =⇒ e−y (y + 1) + c1 y 2 − x. d2 y dy dy du du d2 y − tan x = 1. Let u = , so that =u = . 2 dx dx dx dx dy dx2 du Substituting these results into the ﬁrst equation yields − u tan x = 1. An appropriate integrating factor dx for this equation is I (x) = e− tan xdx = cos x. Therefore, 13. Rewriting the equation, we have d (u cos x) = cos x =⇒ u cos x = sin x + c dx =⇒ u(x) = tan x + c sec x dy = tan x + c sec x =⇒ dx =⇒ y (x) = ln sec x + c1 ln (sec x + tan x) + c2 . 14. Rewriting the equation, we have y d2 y =2 dx2 dy dx 2 + y 2 . Let u = dy du du d2 y , so that =u = . dx dx dy dx2 2 du − u2 = y , a Bernoulli equation. Let z = u2 dy y du 1 dz dz 4 so that u = . Substituting these results into the last equation yields − z = 2y , which has an dy 2 dy dy y integrating factor I (y ) = y −4 . Therefore, Substituting these results into the ﬁrst equation yields u d −4 (y z ) = 2y −3 =⇒ z = c1 y 4 − y 2 dy =⇒ u2 = c1 y 4 − y 2 =⇒ u = ± c1 y 4 − y 2 dy =⇒ = ± c1 y 4 − y 2 dx 1 =⇒ cos−1 = ±x + c2 . √ y c1 Using the facts that y (0) = 1 and y (0) = 0, we ﬁnd that c1 = 1 and c2 = 0. Thus, y (x) = sec x. 105 dy d2 y du du d2 y = ω 2 y , where ω > 0. Let u = . Substituting these results , so that =u = 2 dx dx dx dy dx2 du into the ﬁrst equation yields u = ω 2 y =⇒ u2 = ω 2 y 2 + c2 . Using the given that y (0) = a and y (0) = 0, dy we ﬁnd that c2 = −a2 ω 2 . Then dy = ±ω y 2 − a2 , dx which is a separable diﬀerential equation. Separating variables, we obtain 15. We have 1 cosh−1 (y/a) = ±x + c =⇒ y (x) = a cosh [ω (c ± x)] ω =⇒ y (x) = ±aω sinh [ω (c ± x)], and since y (0) = 0, c = 0. Hence, y (x) = a cosh (ωx). du d2 y dy , so that u = . Substituting these results into the diﬀerential equation yields 16. Let u = dx dx dx2 √ du 1 1√ u 1 + u2 . Separating the variables and integrating we obtain 1 + u2 = y + c. 
Imposing the initial = dy a a √ 1 1 dy 2= y so that 1 + u2 = 2 y 2 or equivalently, conditions y (0) = a, (0) = 0 gives c = 0. Hence, 1 + u dx a a dy 1 1 u = ± y 2 /a2 − 1. Substituting u = and separating the variables gives = ± dx which 2 − a2 dx |a| y can be integrated to obtain cosh−1 (y/a) = ±x/a + c1 so that y = a cosh (±x/a + c1 ). Imposing the initial condition y (0) = a gives c1 = 0 so that y (x) = a cosh (x/a). dy du d2 y , so that = 2 . Substituting these results into the ﬁrst equation gives us the equivalent dx dx dx du + p(x)u = q (x). This has a solution diﬀerential equation dx 17. Let u = u(x) = e− p(x)dx e− p(x)dx q (x)dx + c1 , so that dy = e− dx p(x)dx e− p(x)dx q (x)dx + c1 . Thus y (x) = e− p(x)dx e− p(x)dx q (x)dx + c1 dx + c2 is a solution to the original diﬀerential equation. 18. (a) We have u1 = y =⇒ u2 = Thus, d3 y =F dx3 x, du1 dy du2 d2 y du3 d3 y = =⇒ u3 = = 2 =⇒ = 3. dx dx dx dx dx dx d2 y du3 , since the latter equation is equivalent to = F (x, u3 ). dx2 dx 106 (b) We can replace 1 d3 y = 3 dx x d2 y −1 dx2 by the equivalent ﬁrst order system: du1 = u2 , dx du2 = u3 , dx and du3 1 = (u3 − 1). dx x Therefore, du3 = u3 − 1 dx =⇒ u3 = Kx + 1 x du2 =⇒ = Kx + 1 dx K =⇒ u2 = x2 + x + c2 2 du1 K =⇒ = x2 + x + c2 dx 2 K3 12 =⇒ u1 = x + x + c2 x + c3 6 2 1 =⇒ y (x) = u1 = c1 x3 + x2 + c2 x + c3 . 2 19. (a) Let u = dθ so that dt du d2 θ du dθ du = 2= =u . dt dt dθ dt dθ du g Substituting these results into the given linear diﬀerential equation yields u + θ = 0, and integrating dθ L g this with respect to θ gives u2 = − θ2 + c1 , for some constant c1 . But we are given the initial conditions L dθ g2 (0) = 0 and θ(0) = θ0 , from which it follows that c1 = θ0 . Therefore, dt L u2 = g2 (θ − θ2 ) =⇒ u = ± L0 =⇒ sin−1 . However, since θ(0) = θ0 , we ﬁnd that c2 = sin−1 θ θ0 = g 2 θ0 − θ 2 L θ g =± t + c2 θ0 L π . Thus, 2 π ± 2 g t =⇒ θ = θ0 sin L =⇒ θ = θ0 cos Yes, the predicted motion is reasonable. dθ (b) Let u = so that dt du d2 θ du dθ du = 2= =u . 
du/dt = d²θ/dt² = (du/dθ)(dθ/dt) = u du/dθ.

Substituting these results into (1.11.28) yields u du/dθ + (g/L) sin θ = 0, and integrating this with respect to θ gives u² = (2g/L) cos θ + c, for some constant c. Since θ(0) = θ0 and dθ/dt(0) = 0, we have c = −(2g/L) cos θ0. Therefore,

u² = (2g/L)(cos θ − cos θ0) ⇒ dθ/dt = ± √(2g/L) [cos θ − cos θ0]^(1/2).

(c) From part (b), √(L/(2g)) dθ/[cos θ − cos θ0]^(1/2) = ±dt. When the pendulum goes from θ = θ0 to θ = 0 (which corresponds to one quarter of a period), dθ/dt is negative; hence, choose the negative sign. Thus,

T = −√(L/(2g)) ∫ from θ0 to 0 of dθ/[cos θ − cos θ0]^(1/2) ⇒ T = √(L/(2g)) ∫ from 0 to θ0 of dθ/[cos θ − cos θ0]^(1/2).

(d) From T = √(L/(2g)) ∫ from 0 to θ0 of dθ/[cos θ − cos θ0]^(1/2), and using cos θ − cos θ0 = 2 sin²(θ0/2) − 2 sin²(θ/2), we have

T = √(L/(2g)) ∫ from 0 to θ0 of dθ/[2 sin²(θ0/2) − 2 sin²(θ/2)]^(1/2) = (1/2) √(L/g) ∫ from 0 to θ0 of dθ/[k² − sin²(θ/2)]^(1/2),

where k = sin(θ0/2). We now make a change of variables in this equation as follows: let sin(θ/2) = k sin u. When θ = 0, u = 0, and when θ = θ0, u = π/2. Moreover,

(1/2) cos(θ/2) dθ = k cos u du ⇒ dθ = 2k cos u du / cos(θ/2) = 2k √(1 − sin² u) du / √(1 − sin²(θ/2)) = 2 √(k² − sin²(θ/2)) du / √(1 − k² sin² u).

Making this change of variables in the equation above, we obtain

T = √(L/g) ∫ from 0 to π/2 of du / √(1 − k² sin² u),

where k is defined as above.

Solutions to Section 1.12

Problems:

1. The acceleration due to gravity is a = 32 ft/sec². Integrating, we find that the downward vertical velocity of the ball is v(t) = 32t + c1. Since the ball is initially hit horizontally, we have v(0) = 0, so that c1 = 0. Hence, v(t) = 32t. Integrating again, we find the vertical position s(t) = 16t² + c2, measured downward with s = 0 at the point two feet above the ground where the ball is struck. Then s(0) = 0 gives c2 = 0, so s(t) = 16t². The ball hits the ground when s(t) = 2, so that 16t² = 2 and t = 1/(2√2) ≈ 0.35 sec. Since 80 miles per hour equates to over 117 ft/sec, in this time the horizontal change in position of the ball is more than 117 × 0.35 ≈ 41 feet, more than enough to span the necessary 40 feet for the ball to reach the front wall. Therefore, the ball does reach the front wall before hitting the ground.

2. The acceleration due to gravity is a = 9.8 meters/sec². With distance s measured downward and s = 0 at the launch point (two meters above the ground), integrating gives v(t) = 9.8t + c1. We are given that v(0) = −10, so that c1 = −10. Thus, v(t) = 9.8t − 10. Integrating again and using s(0) = 0, we find the position s(t) = 4.9t² − 10t.
(a) The highest point above the ground is obtained when v(t) = 0. That is, t = 10/9.8 ≈ 1.02 seconds. Thus, the highest point is approximately s(1.02) = 4.9(1.02)² − 10(1.02) ≈ −5.10, which is 5.10 meters above the launch point, or about 7.10 meters above the ground.
(b) The rocket hits the ground when s(t) = 2. That is, 4.9t² − 10t − 2 = 0. Solving for t with the quadratic formula, we find that t ≈ −0.18 or t ≈ 2.22. Since we must report a positive answer, we conclude that the rocket hits the ground approximately 2.22 seconds after launch.

3. We first determine the slope of the given family at the point (x, y). Differentiating y = cx³ with respect to x yields dy/dx = 3cx². We substitute c = y/x³ into the latter equation to obtain dy/dx = 3y/x. Consequently, the differential equation for the orthogonal trajectories is dy/dx = −x/(3y). Separating the variables and integrating gives (3/2)y² = −(1/2)x² + C, which can be written in the equivalent form x² + 3y² = k.

4. We first determine the slope of the given family at the point (x, y). Differentiating y² = cx³ with respect to x yields 2y dy/dx = 3cx², so that dy/dx = 3cx²/(2y). Substituting c = y²/x³ into this latter equation yields dy/dx = 3y/(2x). Consequently, the differential equation for the orthogonal trajectories is dy/dx = −2x/(3y).
dx 3y Separating the variables and integrating gives 32 y = −x2 + C, 2 which can be written in the equivalent form 2x2 + 3y 2 = k. 5. We ﬁrst determine the slope of the given family at the point (x, y ). Diﬀerentiating y = ln(cx) with respect dy dy 1 to x yields dx = x . Consequently, the diﬀerential equation for the orthogonal trajectories is dx = −x. This can be integrated directly to obtain 1 y = − x2 + k. 2 6. We ﬁrst determine the slope of the given family at the point (x, y ). Diﬀerentiating x4 + y 4 = c with 3 dy dy respect to x yields 4x3 + 4y 3 dx = 0. Therefore, dx = − x3 . Consequently, the diﬀerential equation for the y orthogonal trajectories is dy y3 = 3. dx x Separating the variables and integrating gives 1 1 − y −2 = − x−2 + C, 2 2 which can be written in the equivalent form y 2 − x2 = kx2 y 2 . 7. (a) We ﬁrst determine the slope of the given family at the point (x, y ). Diﬀerentiating x2 + 3y 2 = 2cy with 2 2 dy dy dy x respect to x yields 2x + 6y dx = 2c dx , so that dx = c−3y . Substituting c = x +3y into the latter equation 2y yields dy dx = x x2 +3y 2 2y −3y = 2xy x2 −3y 2 , as required. (b) It follows that the diﬀerential equation for the orthogonal trajectories is dy 3y 2 − x2 = . dx 2xy This diﬀerential equation is ﬁrst-order homogeneous. Substituting y = xV into the preceding diﬀerential equation gives dV 3V 2 − 1 x +V = dx 2V 110 which simpliﬁes to dV V2−1 = . dx 2V Separating the variables and integrating, we obtain ln(V 2 − 1) = ln x + C, or, upon exponentiation, V 2 − 1 = y2 kx. Inserting V = y/x into the preceding equation yields x2 − 1 = kx. That is, y 2 − x2 = kx3 . 8. See accompanying ﬁgure. y(x) x Figure 57: Figure for Problem 8 9. See accompanying ﬁgure. y(x) x Figure 58: Figure for Problem 9 111 10. (a) If v (t) = 25, then dv 1 = 0 = (25 − v ). dt 2 (b) The accompanying ﬁgure suggests that lim v (t) = 25. t→∞ v(t) 25 20 15 10 5 t 10 5 Figure 59: Figure for Problem 10 11. 
(a) Separating the variables in Equation (1.12.6) yields mv dv =1 mg − kv 2 dy which can be integrated to obtain m ln(mg − kv 2 ) = y + c. 2k Multiplying both sides of this equation by −1 and exponentiating gives − 2k mg − kv 2 = c1 e− m y . The initial condition v (0) = 0 requires that c1 = mg , which, when inserted into the preceding equation yields 2k mg − kv 2 = mge− m y , 112 or equivalently, v2 = 2k mg 1 − e− m y , k as required. (b) See accompanying ﬁgure. v2(y) mg/k y Figure 60: Figure for Problem 11 12. The given diﬀerential equation is separable. Separating the variables gives y ln x dy =2 , dx x which can be integrated directly to obtain 12 y = (ln x)2 + c, 2 or, equivalently, y 2 = 2(ln x)2 + c1 . 13. The given diﬀerential equation is ﬁrst-order linear. We ﬁrst divide by x to put the diﬀerential equation in standard form: dy 2 − y = 2x ln x. (0.0.1) dx x An integrating factor for this equation is I = e (−2/x)dx = x−2 . Multiplying Equation (0.0.1) by x−2 reduces it to d −2 (x y ) = 2x−1 ln x, dx 113 which can be integrated to obtain x−2 y = (ln x)2 + c so that y (x) = x2 [(ln x)2 + c]. 14. We ﬁrst re-write the given diﬀerential equation in the diﬀerential form 2xy dx + (x2 + 2y )dy = 0. Then My = 2x = Nx so that the diﬀerential equation is exact. Consequently, there exists a potential function φ satisfying ∂φ = 2xy, ∂x ∂φ = x2 + 2y. ∂y Integrating these two equations in the usual manner yields φ(x, y ) = x2 y + y 2 . Therefore Equation (0.0.2) can be written in the equivalent form d(x2 y + y 2 ) = 0 with general solution x2 y + y 2 = c. 15. We ﬁrst rewrite the given diﬀerential equation as dy y 2 + 3xy + x2 = , dx x2 which is ﬁrst order homogeneous. Substituting y = xV into the preceding equation yields x so that x dV + V = V 2 + 3V + 1 dx dV = V 2 + 2V + 1 = (V + 1)2 , dx or, in separable form, 1 dV 1 =. (V + 1)2 dx x This equation can be integrated to obtain −(V + 1)−1 = ln x + c so that V +1= 1 . 
c1 − ln x (0.0.2) 114 Inserting V = y/x into the preceding equation yields y 1 +1= , x c1 − ln x so that y (x) = x − x. c1 − ln x 16. We ﬁrst rewrite the given diﬀerential equation in the equivalent form dy + y · tan x = y 2 sin x, dx which is a Bernoulli equation. Dividing this equation by y 2 yields y −2 dy + y −1 tan x = sin x. dx Now make the change of variables u = y −1 in which case Equation (0.0.3) gives the linear diﬀerential equation − du dx (0.0.3) dy = −y −2 dx . Substituting these results into du + u · tan x = sin x dx or, in standard form, du − u · tan x = − sin x. dx (0.0.4) An integrating factor for this diﬀerential equation is I = e− tan x dx = cos x. Multiplying Equation (0.0.4) by cos x reduces it to d (u · cos x) = − sin x cos x dx which can be integrated directly to obtain u · cos x = so that 1 cos2 x + c, 2 cos2 x + c1 . cos x into the preceding equation and rearranging yields u= Inserting u = y −1 y (x) = 2 cos x . cos2 x + c1 17. The given diﬀerential equation is linear with integrating factor I (x) = e 2e2x 1+e2x dx = eln(1+e 2x ) = 1 + e2x . Multiplying the given diﬀerential equation by 1 + e2x yields d e2x + 1 2e2x (1 + e2x )y = 2x = −1 + 2x dx e −1 e −1 115 which can be integrated directly to obtain (1 + e2x )y = −x + ln |e2x − 1| + c, so that y (x) = −x + ln |e2x − 1| + c . 1 + e2x 18. We ﬁrst rewrite the given diﬀerential equation in the equivalent form dy y+ = dx x2 − y 2 , x which we recognize as being ﬁrst order homogeneous. Inserting y = xV into the preceding equation yields x dV |x| +V =V + dx x 1 − V 2, that is, √ 1 dV 1 =± . x 1 − V 2 dx Integrating we obtain sin−1 V = ± ln |x| + c, so that V = sin(c ± ln |x|). Inserting V = y/x into the preceding equation yields y (x) = x sin(c ± ln |x|). 19. We ﬁrst rewrite the given diﬀerential equation in the equivalent form (sin y + y cos x + 1)dx − (1 − x cos y − sin x)dy = 0. Then My = cos y + cos x = Nx so that the diﬀerential equation is exact. 
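The exactness test M_y = N_x used throughout these problems is easy to verify symbolically. A minimal sketch for the equation of Problem 19, assuming SymPy is available:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Problem 19: M dx + N dy = 0 with
M = sp.sin(y) + y*sp.cos(x) + 1
N = -(1 - x*sp.cos(y) - sp.sin(x))

# Exactness: both partials equal cos(y) + cos(x)
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# The potential function obtained in the solution satisfies
# phi_x = M and phi_y = N:
phi = x - y + x*sp.sin(y) + y*sp.sin(x)
assert sp.simplify(sp.diff(phi, x) - M) == 0
assert sp.simplify(sp.diff(phi, y) - N) == 0
```

The same three-line pattern checks any of the exact equations in this section.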
Consequently, there is a potential function satisfying ∂φ = sin y + y cos x + 1, ∂x ∂φ = −(1 − x cos y − sin x). ∂y Integrating these two equations in the usual manner yields φ(x, y ) = x − y + x sin y + y sin x, so that the diﬀerential equation can be written as d(x − y + x sin y + y sin x) = 0, and therefore has general solution x − y + x sin y + y sin x = c. 116 20. Writing the given diﬀerential equation as dy 1 25 −1 2 + y= y x ln x, dx x 2 we see that it is a Bernoulli equation with n = −1. We therefore divide the equation by y −1 to obtain y 1 dy 25 2 + y2 = x ln x. dx x 2 dy We now make the change of variables u = y 2 , in which case, du = 2y dx . Inserting these results into the dx preceding diﬀerential equation yields 1 du 1 25 2 + u= x ln x, 2 dx x 2 or, in standard form, du 2 + u = 25x2 ln x. dx x An integrating factor for this linear diﬀerential equation is I = e diﬀerential equation by x2 reduces it to (2/x)dx d2 (x u) = 25x4 ln x dx which can be integrated directly to obtain 1 15 x ln x − x5 5 25 x2 u = 25 +c so that u = x3 (5 ln x − 1) + cx−2 . Making the replacement u = y 2 in this equation gives y 2 = x3 (5 ln x − 1) + cx−2 . 21. The given diﬀerential equation can be written in the equivalent form ex−y dy = 2x+y = e−x e−2y , dx e which we recognize as being separable. Separating the variables gives e2y dy = e−x dx which can be integrated to obtain 1 2y e = −e−x + c 2 so that y (x) = 1 ln(c1 − 2e−x ). 2 = x2 . Multiplying the previous 117 22. The given diﬀerential equation is linear with integrating factor I = e given diﬀerential equation by sin x reduces it to cot x dx = sin x. Multiplying the sin x d (y sin x) = dx cos x which can be integrated directly to obtain y sin x = − ln(cos x) + c, so that y (x) = c − ln(cos x) . sin x 23. Writing the given diﬀerential equation as 1 2ex dy + y = 2y 2 e−x , dx 1 + ex 1 we see that it is a Bernoulli equation with n = 1/2. We therefore divide the equation by y 2 to obtain 1 y− 2 2ex 1 dy + y 2 = 2e−x . 
dx 1 + ex 1 1 dy 1 We now make the change of variables u = y 2 , in which case, du = 2 y − 2 dx . Inserting these results into the dx preceding diﬀerential equation yields du 2ex 2 + u = 2e−x , dx 1 + ex or, in standard form, du ex + u = e−x . dx 1 + ex An integrating factor for this linear diﬀerential equation is I=e ex 1+ex dx = eln(1+e x ) = 1 + ex . Multiplying the previous diﬀerential equation by 1 + ex reduces it to d [(1 + ex )u] = e−x (1 + ex ) = e−x + 1 dx which can be integrated directly to obtain (1 + ex )u = −e−x + x + c so that u= x − e−x + c . 1 + ex 1 Making the replacement u = y 2 in this equation gives 1 y2 = x − e−x + c . 1 + ex 118 24. We ﬁrst rewrite the given diﬀerential equation in the equivalent form y dy y ln +1 . = dx x x The function appearing on the right of this equation is homogeneous of degree zero, and therefore the diﬀerential equation itself is ﬁrst order homogeneous. We therefore insert y = xV into the diﬀerential equation to obtain dV x + V = V (ln V + 1), x so that dV x = V ln V. dx Separating the variables yields 1 1 dV = V ln V dx x which can be integrated to obtain ln(ln V ) = ln x + c. Exponentiating both side of this equation gives ln V = c1 x, or equivalently, V = ec1 x . Inserting V = y/x in the preceding equation yields y = xec1 x . 25. For the given diﬀerential equation we have M (x, y ) = 1 + 2xey and N (x, y ) = −(ey + x), so that 1 + 2xey My − N x = = 1. M 1 + 2xey Consequently, an integrating factor for the given diﬀerential equation is I = e− dy = e−y . Multiplying the given diﬀerential equation by e−y yields the exact diﬀerential equation (2x + e−y )dx − (1 + xe−y )dy = 0. Therefore, there exists a potential function φ satisfying ∂φ = 2x + e−y , ∂x ∂φ = −(1 + xe−y ). ∂y Integrating these two equations in the usual manner yields φ(x, y ) = x2 − y + xe−y . (0.0.5) 119 Therefore Equation (0.0.5) can be written in the equivalent form d(x2 − y + xe−y ) = 0 with general solution x2 − y + xe−y = c. 26. 
The given diﬀerential equation is ﬁrst-order linear. However, it can also e written in the equivalent form dy = (1 − y ) sin x dx which is separable. Separating the variables and integrating yields − ln |1 − y | = − cos x + c, so that 1 − y = c1 ecos x . Hence, y (x) = 1 − c1 ecos x . 27. For the given diﬀerential equation we have M (x, y ) = 3y 2 + x2 N (x, y ) = −2xy, and so that My − N x 4 =− . N x Consequently, an integrating factor for the given diﬀerential equation is I = e− 4 x dx = x−4 . Multiplying the given diﬀerential equation by x−4 yields the exact diﬀerential equation (3y 2 x−4 + x−2 )dx − 2yx−3 dy = 0. Therefore, there exists a potential function φ satisfying ∂φ = 3y 2 x−4 + x−2 , ∂x ∂φ = −2yx−3 . ∂y Integrating these two equations in the usual manner yields φ(x, y ) = −y 2 x−3 − x−1 . Therefore Equation (0.0.6) can be written in the equivalent form d(−y 2 x−3 − x−1 ) = 0 with general solution −y 2 x−3 − x−1 = c, (0.0.6) 120 or equivalently, x2 + y 2 = c1 x3 . Notice that the given diﬀerential equation can be written in the equivalent form 3y 2 + x2 dy = , dx 2xy which is ﬁrst-order homogeneous. Another equivalent way of writing the given diﬀerential equation is dy 3 1 − y = xy −1 , dx 2x 2 which is a Bernoulli equation. 28. The given diﬀerential equation can be written in the equivalent form 1 9 dy − y = − x2 y 3 , dx 2x ln x 2 which is a Bernoulli equation with n = 3. We therefore divide the equation by y 3 to obtain y −3 1 9 dy − y −2 = − x2 . dx 2x ln x 2 We now make the change of variables u = y −2 , in which case, the preceding diﬀerential equation yields − du dx dy = −2y −3 dx . Inserting these results into 1 9 1 du − u = − x2 , 2 dx 2x ln x 2 or, in standard form, du 1 + u = 9x2 . dx x ln x An integrating factor for this linear diﬀerential equation is I=e 1 x ln x dx = eln(ln x) = ln x. 
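Once the integration is carried out, the solution of the linear equation for u in Problem 28 can be confirmed symbolically; a sketch assuming SymPy is available (here u stands for the Bernoulli substitution variable):

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)

# Candidate solution of du/dx + u/(x ln x) = 9x^2 from Problem 28
u = (x**3*(3*sp.log(x) - 1) + c) / sp.log(x)

# The residual of the standard-form equation should vanish identically
residual = sp.diff(u, x) + u/(x*sp.log(x)) - 9*x**2
assert sp.simplify(residual) == 0
```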
Multiplying the previous diﬀerential equation by ln x reduces it to d (ln x · u) = 9x2 ln x dx which can be integrated to obtain ln x · u = x3 (3 ln x − 1) + c so that u= x3 (3 ln x − 1) + c . ln x Making the replacement u = y 3 in this equation gives y3 = x3 (3 ln x − 1) + c . ln x 121 29. Separating the variables in the given diﬀerential equation yields 1 dy 2+x 1 = =1+ , y dx 1+x 1+x which can be integrated to obtain ln |y | = x + ln |1 + x| + c. Exponentiating both sides of this equation gives y (x) = c1 (1 + x)ex . 30. The given diﬀerential equation can be written in the equivalent form 2 dy +2 y=1 dx x − 1 which is ﬁrst-order linear. An integrating factor is I=e 2 dx x2 −1 1 1 x−1 = e ( x−1 − x+1 )dx = e[ln(x−1)−ln(x+1)] = . x+1 Multiplying (0.0.7) by (x − 1)/(x + 1) reduces it to the integrable form d dx x−1 ·y x+1 = x−1 2 =1− . x+1 x+1 Integrating both sides of this diﬀerential equation yields x−1 ·y x+1 so that y (x) = = x − 2 ln(x + 1) + c x+1 x−1 [x − 2 ln(x + 1) + c]. 31. The given diﬀerential equation can be written in the equivalent form [y sec2 (xy ) + 2x]dx + x sec2 (xy )dy = 0 Then My = sec2 (xy ) + 2xy sec2 (x) tan(xy ) = Nx so that the diﬀerential equation is exact. Consequently, there is a potential function satisfying ∂φ = y sec2 (xy ) + 2x, ∂x ∂φ = x sec2 (xy ). ∂y Integrating these two equations in the usual manner yields φ(x, y ) = x2 + tan(xy ), so that the diﬀerential equation can be written as d(x2 + tan(xy )) = 0, (0.0.7) 122 and therefore has general solution x2 + tan(xy ) = c, or equivalently, y (x) = tan−1 (c − x2 ) . x 32. CHANGE PROBLEM IN TEXT TO dy √ + 4xy = 4x y. dx The given diﬀerential equation is a Bernoulli equation with n = 1 . We therefore divide the equation by 2 y to obtain 1 1 dy − 4xy 2 = 4x. y− 2 dx 1 2 1 1 dy We now make the change of variables u = y 2 , in which case, du = 1 y − 2 dx . Inserting these results into the dx 2 preceding diﬀerential equation yields du 2 + 4xu = 4x, dx or, in standard form, du + 2xu = 2x. 
dx An integrating factor for this linear diﬀerential equation is I=e 2x dx 2 = ex . 2 Multiplying the previous diﬀerential equation by ex reduces it to 2 2 d ex u = 2xex . dx which can be integrated directly to obtain 2 2 ex u = ex + c so that 2 u = 1 + cex . 1 Making the replacement u = y 2 in this equation gives 1 2 y 2 = 1 + cex . 33. CHANGE PROBLEM IN TEXT TO dy x2 y =2 + dx x + y2 x then the answer is correct. The given diﬀerential equation is ﬁrst-order homogeneous. Inserting y = xV into the given equation yields dV 1 x +V = + V, dx 1+V2 123 that is, (1 + V 2 ) 1 dV =. dx x Integrating we obtain 1 V + V 3 = ln |x| + c. 3 Inserting V = y/x into the preceding equation yields y y3 + 3 = ln |x| + c. x 3x 34. For the given diﬀerential equation we have My = 1 = Nx y so that the diﬀerential equation is exact. Consequently, there is a potential function satisfying ∂φ = ln(xy ) + 1, ∂x ∂φ x = + 2y. ∂y y Integrating these two equations in the usual manner yields φ(x, y ) = x ln(xy ) + y 2 , so that the diﬀerential equation can be written as d[x ln(xy ) + y 2 ] = 0, and therefore has general solution x ln(xy ) + y 2 = c. 35. The given diﬀerential equation is a Bernoulli equation with n = −1. We therefore divide the equation by y −1 to obtain dy 1 25 ln x y + y2 = . dx x 2x3 dy We now make the change of variables u = y 2 , in which case, du = 2y dx . Inserting these results into the dx preceding diﬀerential equation yields 1 du 1 25 ln x + u= , 2 dx x 2x3 or, in standard form, du 2 + u = 25x−3 ln x. dx x An integrating factor for this linear diﬀerential equation is I=e 2 x dx = x2 . Multiplying the previous diﬀerential equation by x2 reduces it to d2 (x u) = 25x−1 ln x, dx 124 which can be integrated directly to obtain x2 u = 25 (ln x)2 + c 2 so that 25(ln x)2 + c . 2x2 Making the replacement u = y 2 in this equation gives u= y2 = 25(ln x)2 + c . 2x2 36. The problem as written is separable, but the integration does not work. CHANGE PROBLEM TO: (1 + y ) dy = xex−y . 
dx The given diﬀerential equation can be written in the equivalent form ey (1 + y ) dy = xex dx which is separable. Integrating both sides of this equation gives yey = ex (x − 1) + c. 37. The given diﬀerential equation can be written in the equivalent form dy cos x − y = − cos x dx sin x which is ﬁrst order linear with integrating factor I = e− cos x sin x dx Multiplying the preceding diﬀerential equation by d dx = e− ln(sin x) = 1 sin x 1 ·y sin x 1 . sin x reduces it to =− cos x sin x which can be integrated directly to obtain 1 · y = − ln(sin x) + c sin x so that y (x) = sin x[c − ln(sin x)]. 38. The given diﬀerential equation is linear, and therefore can be solved using an appropriate integrating factor. However, if we rearrange the terms in the given diﬀerential equation then it can be written in the equivalent form 1 dy = x2 1 + y dx 125 which is separable. Integrating both sides of the preceding diﬀerential equation yields 13 x +c 3 ln(1 + y ) = so that 1 3 y (x) = c1 e 3 x − 1. Imposing the initial condition y (0) = 5 we ﬁnd c1 = 6. Therefore the solution to the initial-value problem is 1 3 y (x) = 6e 3 x − 1. 39. The given diﬀerential equation can be written in the equivalent form e−6y dy = −e−4x dx which is separable. Integrating both sides of the preceding equation yields 1 1 − e−6y = e−4x + c 6 4 so that 1 3 y (x) = − ln c1 − e−4x . 6 2 Imposing the initial condition y (0) = 0 requires that 0 = ln c1 − Hence, c1 = 5 , and so 2 1 y (x) = − ln 6 3 2 . 5 − 3e−4x 2 . 40. For the given diﬀerential equation we have My = 4xy = Nx so that the diﬀerential equation is exact. Consequently, there is a potential function satisfying ∂φ = 3x2 + 2xy 2 , ∂x ∂φ = 2x2 y. ∂y Integrating these two equations in the usual manner yields φ(x, y ) = x2 y 2 + x3 , so that the diﬀerential equation can be written as d(x2 y 2 + x3 ) = 0, and therefore has general solution x2 y 2 + x3 = c. 126 Imposing the initial condition y (1) = 3 yields c = 10. 
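Problem 40's exactness, potential function, and constant can all be confirmed with a short symbolic check (a sketch assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Problem 40: (3x^2 + 2xy^2) dx + 2x^2 y dy = 0
M = 3*x**2 + 2*x*y**2
N = 2*x**2*y
phi = x**2*y**2 + x**3

assert sp.diff(M, y) == sp.diff(N, x)      # exactness: both equal 4xy
assert sp.expand(sp.diff(phi, x)) == M     # phi_x = M
assert sp.diff(phi, y) == N                # phi_y = N
assert phi.subs({x: 1, y: 3}) == 10        # y(1) = 3 forces c = 10
```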
Therefore, x^2 y^2 + x^3 = 10, so that

y^2 = (10 − x^3)/x^2.

Note that the given differential equation can be written in the equivalent form

dy/dx + (1/x) y = −(3/2) y^(−1),

which is a Bernoulli equation with n = −1. Consequently, the Bernoulli technique could also have been used to solve the differential equation.

41. The given differential equation is linear with integrating factor I = e^(∫(−sin x) dx) = e^(cos x). Multiplying the given differential equation by e^(cos x) reduces it to the integrable form

d/dx (e^(cos x) · y) = 1,

which can be integrated directly to obtain e^(cos x) · y = x + c. Hence, y(x) = e^(−cos x)(x + c). Imposing the given initial condition y(0) = 1/e requires that c = 1. Consequently, y(x) = e^(−cos x)(x + 1).

42. (a) For the given differential equation we have M_y = m y^(m−1) and N_x = −n x^(n−1) y^3. The only values of m and n for which M_y = N_x are m = n = 0. Consequently, these are the only values of m and n for which the differential equation is exact.

(b) We rewrite the given differential equation in the equivalent form

dy/dx = (x^5 + y^m)/(x^n y^3),   (0.0.8)

from which we see that the differential equation is separable provided m = 0. In this case there are no restrictions on n.

(c) From Equation (0.0.8) we see that the only values of m and n for which the differential equation is first-order homogeneous are m = 5 and n = 2.

(d) We now rewrite the given differential equation in the equivalent form

dy/dx − x^(−n) y^(m−3) = x^(5−n) y^(−3).   (0.0.9)

Due to the y^(−3) term on the right-hand side of the preceding differential equation, there are no values of m and n for which the equation is linear.

(e) From Equation (0.0.9) we see that the differential equation is a Bernoulli equation whenever m = 4. There are no constraints on n in this case.

43. In Newton's Law of Cooling we have Tm = 180°F, T(0) = 80°F, T(3) = 100°F. We need to determine the time t0 when T(t0) = 140°F.
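The cooling-law algebra that follows can be cross-checked numerically with a few lines of plain Python (no extra libraries needed):

```python
import math

# Problem 43 data: T_m = 180, T(0) = 80, T(3) = 100 (degrees F, t in minutes)
# Model: T(t) = 180 - 100*exp(-k*t)
k = math.log(5/4) / 3            # from 100 = 180 - 100*exp(-3k)
T = lambda t: 180 - 100*math.exp(-k*t)

assert abs(T(0) - 80) < 1e-12    # initial condition
assert abs(T(3) - 100) < 1e-12   # measured condition

# Time at which T = 140, i.e. exp(-k*t0) = 2/5
t0 = math.log(5/2) / k
print(round(t0, 2))              # 12.32
```

The printed value agrees with the analytic answer t0 ≈ 12.32 min obtained in the solution.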
The temperature of the sandals at time t is governed by the diﬀerential equation dT = −k (T − 180). dt This separable diﬀerential equation is easily integrated to obtain T (t) = 180 + ce−kt . Since T (0) = 80 we have 80 = 180 + c =⇒ c = −100. Hence, T (t) = 180 − 100e−kt . Imposing the condition T (3) = 100 requires 100 = 180 − 100e−3k . Solving for k we ﬁnd k = 1 3 ln 5 4 . Inserting this value for k into the preceding expression for T (t) yields t 5 T (t) = 180 − 100e− 3 ln( 4 ) . We need to ﬁnd t0 such that t0 140 = 180 − 100e− 3 ln( 5 ) 4 . Solving for t0 we ﬁnd t0 = 3 ln ln 5 2 5 4 ≈ 12.32 min. 44. In Newton’s Law of Cooling we have Tm = 70◦ F, T (0) = 150◦ F, T (10) = 125◦ F. We need to determine the time, t0 when T (t0 ) = 100◦ F. The temperature of the plate at time t is governed by the diﬀerential equation dT = −k (T − 70). dt 128 This separable diﬀerential equation is easily integrated to obtain T (t) = 70 + ce−kt . Since T (0) = 150 we have 150 = 70 + c =⇒ c = 80. Hence, T (t) = 70 + 80e−kt . Imposing the condition T (10) = 125 requires 125 = 70 + 80e−10k . Solving for k we ﬁnd k = 1 10 ln 16 11 . Inserting this value for k into the preceding expression for T (t) yields 16 t T (t) = 70 + 80e− 10 ln( 11 ) . We need to ﬁnd t0 such that t0 16 100 = 70 + 80e− 10 ln( 11 ) . Solving for t0 we ﬁnd t0 = 10 ln ln 8 3 16 11 ≈ 26.18 min. 45. Let T (t) denote the temperature of the object at time t, and let Tm denote the temperature of the surrounding medium. Then we must solve the initial-value problem dT = k (T − Tm )2 , dt T (0) = T0 , where k is a constant. The diﬀerential equation can be written in separated form as 1 dT = k. (T − Tm )2 dt Integrating both sides of this diﬀerential equation yields − 1 = kt + c T − Tm so that T (t) = Tm − 1 . kt + c Imposing the initial condition T (0) = T0 we ﬁnd that c= 1 Tm − T0 which, when substituted back into the preceding expression for T (t) yields T (t) = Tm − 1 kt + 1 Tm −T0 = Tm − Tm − T0 . 
k (Tm − T0 )t + 1 129 As t → ∞, T (t) approaches Tm . 46. We are given the diﬀerential equation dT = −k (T − 5 cos 2t) dt (0.0.10) dT (0) = 5. dt (0.0.11) together with the initial conditions T (0) = 0; (a) Setting t = 0 in (0.0.10) and using (0.0.11) yields 5 = −k (0 − 5) so that k = 1. (b) Substituting k = 1 into the diﬀerential equation (0.0.10) and rearranging terms yields dT + T = 5 cos t. dt An integrating factor for this linear diﬀerential equation is I = e diﬀerential equation by et reduces it to dt = et . Multiplying the preceding dt (e · T ) = 5et cos 2t dt which upon integration yields et · T = et (cos 2t + 2 sin 2t) + c, so that T (t) = ce−t + cos 2t + 2 sin 2t. Imposing the initial condition T (0) = 0 we ﬁnd that c = −1. Hence, T (t) = cos 2t + 2 sin 2t − e−t . (c) For large values of t we have T (t) ≈ cos 2t + 2 sin 2t, which can be written in phase-amplitude form as T (t) ≈ √ 5 cos(2t − φ), where tan φ = 2. consequently, for large t, the temperature is approximately oscillatory with period π and √ amplitude 5. 47. If we let C (t) denote the number of sandhill cranes in the Platte River valley t days after April 1, then C (t) is governed by the diﬀerential equation dC = −kC dt (0.0.12) 130 together with the auxiliary conditions C (0) = 500, 000; C (15) = 100, 000. (0.0.13) Separating the variables in the diﬀerential equation (0.0.12) yields 1 dC = −k, C dt which can be integrated directly to obtain ln C = −kt + c. Exponentiation yields C (t) = c0 e−kt . The initial condition C (0) = 500, 000 requires c0 = 500, 000, so that C (t) = 500, 000e−kt . (0.0.14) Imposing the auxiliary condition C (15) = 100, 000 yields 100, 000 = 500, 000e−15k . Taking the natural logarithm of both sides of the preceding equation and simplifying we ﬁnd that k = Substituting this value for k into (0.0.14) gives t C (t) = 500, 000e− 15 ln 5 . (a) C (3) = 500, 000e−2 ln 5 = 500, 000 · 35 − 15 (b) C (35) = 500, 000e ln 5 1 25 1 15 ln 5. (0.0.15) = 20, 000 sandhile cranes. 
≈ 11696 sandhile cranes. (c) We need to determine t0 such that t0 1000 = 500, 000e− 15 ln 5 that is, 1 . 500 Taking the natural logarithm of both sides of this equation and simplifying yields t0 e− 15 ln 5 = t0 = 15 · ln 500 ≈ 57.9 days after April 1. ln 5 48. Substituting P0 = 200, 000 into Equation (1.5.3) in the text (page 45) yields P (t) = 200, 000C . 200, 000 + (C − 200, 000)e−rt We are given P (3) = P (t1 ) = 230, 000, P (6) = P (t2 ) = 250, 000. (0.0.16) 131 Since t2 = 2t1 we can use the formulas (1.5.5) and (1.5.6) on page 47 of the text to obtain r and C directly as follows: 1 25(23 − 20) 1 15 r = ln = ln ≈ 0.21. 3 20(25 − 23) 3 8 C= 230, 000[(23)(45) − (40)(25)] = 277586. (23)2 − (20)(25) Substituting these values for r and C into (0.0.16) yields P (t) = 55517200000 . 200, 000 + (77586)e−0.21t Therefore, P (10) = 55517200000 ≈ 264, 997, 200, 000 + (77586)e−2.1 P (20) = 55517200000 ≈ 275981. 200, 000 + (77586)e−4.2 and 49. The diﬀerential equation for determining q (t) is dq 5 3 + q = cos 2t, dt 4 2 5 4 dt which has integrating factor I = e it to the integrable form 5 5 = e 4 t . Multiplying the preceding diﬀerential equation by e 4 t reduces 5 d 35 e 4 t · q = e 4 t cos 2t. dt 2 Integrating and simplifying we ﬁnd q (t) = 5 6 (5 cos 2t + 8 sin 2t) + ce− 4 t . 89 The initial condition q (0) = 3 requires 3= so that c = 237 89 . 30 + c, 89 Making this replacement in (0.0.17) yields q (t) = 6 237 − 5 t (5 cos 2t + 8 sin 2t) + e 4. 89 89 The current in the circuit is i(t) = dq 12 1185 − 5 t = (8 cos 2t − 5 sin 2t) − e 4. dt 89 356 Answer in text has incorrect exponent. 50. The current in the circuit is governed by the diﬀerential equation di 100 + 10i = , dt 3 (0.0.17) 132 which has integrating factor I = e reduces it to the integrable form 10 dt = e10t . Multiplying the preceding diﬀerential equation by e10t 100 10t d 10t e ·i = e. dt 3 Integrating and simplifying we ﬁnd i(t) = 10 + ce−10t . 
3 (0.0.18) The initial condition i(0) = 3 requires 10 + c, 3 so that c = − 1 . Making this replacement in (0.0.18) yields 3 3= i(t) = 1 (10 − e−10t ). 3 51. We are given: r1 = 6 L/min, c1 = 3 g/L, r2 = 4 L/min, V (0) = 30 L, A(0) = 0 g, and we need to determine the amount of salt in the tank when V (t) = 60L. Consider a small time interval ∆t. Using the preceding information we have: ∆V = 6∆t − 4∆t = 2∆t, and A ∆t. V Dividing both of these equations by ∆t and letting ∆t → 0 yields ∆A ≈ 18∆t − 4 dV = 2. dt (0.0.19) dA A + 4 = 18. dt V Integrating (0.0.19) and imposing the initial condition V (0) = 30 yields V (t) = 2(t + 15). (0.0.20) (0.0.21) We now insert this expression for V (t) into (0.0.20) to obtain dA 2 + A = 18. dt t + 15 2 An integrating factor for this diﬀerential equation is I = e t+15 dt = (t + 15)2 . Multiplying the preceding diﬀerential equation by (t + 15)2 reduces it to the integrable form d (t + 15)2 A = 18(t + 15)2 . dt Integrating and simplifying we ﬁnd A(t) = 6(t + 15)3 + c . (t + 15)2 133 Imposing the initial condition A(0) = 0 requires 0= 6(15)3 + c , (15)2 so that c = −20250. Consequently, 6(t + 15)3 − 20250 . (t + 15)2 We need to determine the time when the solution overﬂows. Since the tank can hold 60 L of solution, from (0.0.21) overﬂow will occur when 60 = 2(t + 15) =⇒ t = 15. A(t) = The amount of chemical in the tank at this time is 6(30)3 − 20250 ≈ 157.5 g. A(15) = (30)2 52. Applying Euler’s method with y = x2 + 2y 2 , x0 = 0, y0 = −3, and h = 0.1 we have yn+1 = yn + 0.1(x2 + n 2 2yn ). This generates the sequence of approximants given in the table below. n 1 2 3 4 5 6 7 8 9 10 xn 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 yn −1.2 −0.911 −0.74102 −0.62219 −0.52877 −0.44785 −0.371736 −0.29510 −0.21368 −0.12355 Consequently the Euler approximation to y (1) is y10 = −0.12355. 53. Applying Euler’s method with y = 3x + 2, x0 = 1, y0 = 2, and h = 0.05 we have y yn+1 = yn + 0.05 3xn +2 . 
(that is, yn+1 = yn + 0.05 (3xn + 2)/yn). This generates the sequence of approximants given in the table below.

n     xn      yn
1     1.05    2.1750
2     1.10    2.34741
3     1.15    2.51770
4     1.20    2.68622
5     1.25    2.85323
6     1.30    3.01894
7     1.35    3.18353
8     1.40    3.34714
9     1.45    3.50988
10    1.50    3.67185

Consequently, the Euler approximation to y(1.5) is y10 = 3.67185.

54. Applying the modified Euler method with y' = x^2 + 2y^2, x0 = 0, y0 = −3, and h = 0.1 generates the sequence of approximants given in the table below.

n     xn     yn
1     0.1    −1.9555
2     0.2    −1.42906
3     0.3    −1.11499
4     0.4    −0.90466
5     0.5    −0.74976
6     0.6    −0.62555
7     0.7    −0.51778
8     0.8    −0.41723
9     0.9    −0.31719
10    1.0    −0.21196

Consequently, the modified Euler approximation to y(1) is y10 = −0.21196. Comparing this to the corresponding Euler approximation from Problem 52 we have |yME − yE| = |0.21196 − 0.12355| = 0.08841.

55. Applying the modified Euler method with y' = (3x + 2)/y, x0 = 1, y0 = 2, and h = 0.05 generates the sequence of approximants given in the table below.

n     xn      yn
1     1.05    2.17371
2     1.10    2.34510
3     1.15    2.51457
4     1.20    2.68241
5     1.25    2.84886
6     1.30    3.01411
7     1.35    3.17831
8     1.40    3.34159
9     1.45    3.50404
10    1.50    3.66576

Consequently, the modified Euler approximation to y(1.5) is y10 = 3.66576. Comparing this to the corresponding Euler approximation from Problem 53 we have |yME − yE| = |3.66576 − 3.67185| = 0.00609.

56. Applying the Runge-Kutta method with y' = x^2 + 2y^2, x0 = 0, y0 = −3, and h = 0.1 generates the sequence of approximants given in the table below.

n     xn     yn
1     0.1    −1.87392
2     0.2    −1.36127
3     0.3    −1.06476
4     0.4    −0.86734
5     0.5    −0.72143
6     0.6    −0.60353
7     0.7    −0.50028
8     0.8    −0.40303
9     0.9    −0.30541
10    1.0    −0.20195

Consequently, the Runge-Kutta approximation to y(1) is y10 = −0.20195. Comparing this to the corresponding Euler approximation from Problem 52 we have |yRK − yE| = |0.20195 − 0.12355| = 0.07840.

57. Applying the Runge-Kutta method with y' = (3x + 2)/y, x0 = 1, y0 = 2, and h = 0.05 generates the sequence of approximants given in the table below.
n 1 2 3 4 5 6 7 8 9 10 xn 1.05 1.10 1.15 1.20 1.25 1.30 1.35 1.40 1.45 1.50 yn 2.17369 2.34506 2.51452 2.68235 2.84880 3.01404 3.17823 3.34151 3.50396 3.66568 Consequently the Runge-Kutta approximation to y (1.5) is y10 = 3.66568. Comparing this to the corresponding Euler approximation from Problem 53 we have |yRK − yE | = |3.66568 − 3.67185| = 0.00617. Last digit in answer the text needs changing. Solutions to Section 2.1 True-False Review: 1. TRUE. A diagonal matrix has no entries below the main diagonal, so it is upper triangular. Likewise, it has no entries above the main diagonal, so it is also lower triangular. 2. FALSE. An m × n matrix has m row vectors and n column vectors. 136 3. TRUE. Since A is symmetric, A = AT . Thus, (AT )T = A = AT , so AT is symmetric. 4. FALSE. The trace of a matrix is the sum of the entries along the main diagonal. 5. TRUE. If A is skew-symmetric, then AT = −A. But A and AT contain the same entries along the main diagonal, so for AT = −A, both A and −A must have the same main diagonal. This is only possible if all entries along the main diagonal are 0. 6. TRUE. If A is both symmetric and skew-symmetric, then A = AT = −A, and A = −A is only possible if all entries of A are zero. 7. TRUE. Both matrix functions are deﬁned for values of t such that t > 0. 8. FALSE. The (3, 2)-entry contains a function that is not deﬁned for values of t with t ≤ 3. So for example, this matrix functions is not deﬁned for t = 2. 9. TRUE. Each numerical entry of the matrix function is a constant function, which has domain R. 10. FALSE. For instance, the matrix function A(t) = [t] and B (t) = [t2 ] satisfy A(0) = B (0), but A and B are not the same matrix function. Problems: 1. a31 = 0, a24 = −1, a14 = 2, a32 = 2, a21 = 7, a34 = 4. 2. 1 −1 3. 2 0 4. 5. 6. 5 3 ; 2 × 2 matrix. 1 −1 4 −2 ; 2 × 3 matrix. −1 1 ; 4 × 1 matrix. 1 −5 1 −3 −2 3 6 0 ; 4 × 3 matrix. 2 7 4 −4 −1 5 0 −1 2 1 0 3 ; 3 × 3 matrix. −2 −3 0 7. tr(A) = 1 + 3 = 4. 8. tr(A) = 1 + 2 + (−3) = 0. 
9. tr(A) = 2 + 2 + (−5) = −1. 1 −1 , . 3 5 Row vectors: [1 − 1], [3 5]. 1 3 −4 11. Column vectors: −1 , −2 , 5 . 2 6 7 Row vectors: [1 3 − 4], [−1 − 2 5], [2 6 7]. 10. Column vectors: 137 12. Column vectors: 2 5 10 −1 , , 6 3 . Row vectors: [2 10 6], [5 − 1 3]. 2 1 2 4 . Column vectors: 3 , 4 . 1 5 1 2 501 −1 7 0 2 . Row vectors: [2 5 0 1], [−1 7 0 2], [4 − 6 0 3]. 14. B = 4 −6 0 3 1 13. A = 3 5 15. A = [a1 , a2 , . . . , ap ] has p columns and each column dimensions q × p. 20 0 0 . 16. One example: 0 3 0 0 −1 2312 0 5 6 2 17. One example: 0 0 3 5 . 0001 1 3 −1 2 −3 0 4 −3 . 18. One example: 1 −4 0 1 −2 3 −1 0 300 19. One example: 0 2 0 . 005 00 20. The only possibility here is the zero matrix: 0 0 00 √ 1 √ t+2 0 3−t . 21. One example: 0 0 0 2 t −t 0 0 0 . 22. One example: 0 0 0 0 23. One example: 24. One example: t2 + 1 1 t2 +1 0 1 1 1 q -vector has q rows, so the resulting matrix has 0 0 . 0 1. . 25. One example: Let A and B be 1 × 1 matrix functions given by A(t) = [t] and B (t) = [t2 ]. 26. Let A be a symmetric upper-triangular matrix. Then all elements below the main diagonal are zeros. Consequently, since A is symmetric, all elements above the main diagonal must also be zero. Hence, the 138 only nonzero entries can occur along the main diagonal. That is, A is a diagonal matrix. 27. Since A is skew-symmetric, a11 a22 = a33 =0. Further, a12 = −a21 = −1, a13 = −a31 = −3, and = 0 −1 −3 0 −1 . a32 = −a23 = 1. Consequently, A = 1 3 1 0 Solutions to Section 2.2 True-False Review: 1. FALSE. The correct statement is (AB )C = A(BC ), the associative law. A counterexample to the particular statement given in this review item can be found in Problem 7. 2. TRUE. Multiplying from left to right, we note that AB is an m × p matrix, and right multiplying AB by the p × q matrix C , we see that ABC is an m × q matrix. 3. TRUE. We have (A + B )T = AT + B T = A + B , so A + B is 010 00 4. FALSE. For example, let A = −1 0 0 , B = 0 0 000 −3 0 00 0 but AB = 0 0 −3 is not symmetric. 
00 0 symmetric. 3 0 . Then A and B are skew-symmetric, 0 5. FALSE. The correct equation is (A + B )2 = A2 + AB + BA + B 2 . The statement is false since AB + BA 10 01 11 does not necessarily equal 2AB . For instance, if A = and B = , then (A + B )2 = 00 00 00 12 and A2 + 2AB + B 2 = = (A + B )2 . 00 6. FALSE. For example, let A = 0 0 1 0 and B = 1 0 0 0 . Then AB = 0 even though A = 0 and B = 0. 00 00 and let B = . Then A is not upper triangular, despite 10 00 the fact that AB is the zero matrix, hence automatically upper triangular. 7. FALSE. For example, let A = 8. FALSE. For instance, the matrix A = 1 0 0 0 is neither the zero matrix nor the identity matrix, and 2 yet A = A. 9. TRUE. The derivative of each entry of the matrix is zero, since in each entry, we take the derivative of a constant, thus obtaining zero for each entry of the derivative of the matrix. 10. FALSE. The correct statement is given in Problem 41. The problem with the statement as given is that the second term should be dA B , not B dA . dt dt 11. FALSE. For instance, the matrix function A = form cet 0 0 cet . 2et 0 0 3et satisﬁes A = dA dt , but A does not have the 139 12. TRUE. This follows by exactly the same proof as given in the text for matrices of numbers (see part 3 of Theorem 2.2.21). Problems: 1. 2A = 2 6 4 −2 10 4 −6 3 −9 −3 −12 −15 , −3B = A − 2B = = 1 3 , 2 −1 5 2 − 4 −2 2 8 6 10 −3 4 −7 1 −3 −8 , 3A + 4B = = 3 9 11 13 6 −3 15 6 2 31 9 26 + 8 −4 12 4 16 20 . 2. Solving for D, we have 2A + B − 3C + 2D = A + 4C 2D = −A − B + 7C 1 D = (−A − B + 7C ). 2 When appropriate substitutions are made for A,B , and C , we obtain: −5 −2.5 2.5 6.5 9 . D = 1.5 −2.5 2.5 −0.5 3. 5 10 −3 27 22 3 AB = 9 BC = 8 , −6 , DC = [10], 2 −2 3 2 −3 . CD = −2 4 −4 6 DB = [6 14 − 4], CA and AD cannot be computed. 4. AB = 2−i 1+i −i 2 + 4i = (2 − i)i + (1 + i)0 −i(i) + (2 + 4i)0 = 1 + 2 i 2 − 2i 1 1 + 17i i 1 − 3i 0 4+i (2 − i)(1 − 3i) + (1 + i)(4 + i) −i(1 − 3i) + (2 + 4i)(4 + i) . 140 5. 
AB = 3 + 2i 2 − 4i 5 + i −1 + 3i −1 + i 3 + 2i 4 − 3i 1 + i = (3 + 2i)(−1 + i) + (2 − 4i)(4 − 3i) (5 + i)(−1 + i) + (−1 + 3i)(4 − 3i) = −9 − 21i 11 + 10i −1 + 19i 9 + 15i 6. 3 − 2i i −i 1 AB = (3 + 2i)(3 + 2i) + (2 − 4i)(1 + i) (5 + i)(3 + 2i) + (−1 + 3i)(1 + i) . −1 + i 2 − i 0 1 + 5i 0 3 − 2i = (3 − 2i)(−1 + i) + i(1 + 5i) −i(−1 + i) + 1(1 + 5i) = (3 − 2i)(2 − i) + i · 0 −i(2 − i) + 1 · 0 −6 + 6i 4 − 7i 2 + 3i 2 + 6i −1 − 2i 3 − 2i 7. (3 − 2i)0 + i(3 − 2i) −i · 0 + 1(3 − 2i) 3 2 1 5 4 −3 C −1 6 1 −1 2 3 −2 346 ABC = 7 7 = CAB = = −3 2 1 −4 −3 2 1 −4 9 35 1 −1 2 3 −2 346 −7 9 2 9 −13 −14 3 −21 8. (2A − 3B )C = = 2 = −12 −22 14 −126 B 3 2 1 5 4 −3 = −1 6 1 −2 3 1 5 −10 −9 −7 . −3 −1 5 3 −1 −7 43 −21 −131 2 3 C 25 −20 = . . 9. Ac = 10. 11. 1 −5 3 4 6 −2 =6 1 −5 + (−2) 3 4 = 0 −38 . 4 −13 3 −1 4 2 3 −1 1 5 3 = 2 2 + 3 1 + (−4) 5 = −13 . Ac = 2 7 −6 3 −4 7 −6 3 −16 −1 2 7 Ac = 4 5 −4 5 −1 −1 2 −7 = 5 4 + (−1) 7 = 13 . 5 −4 29 141 12. The dimensions of B should be n × r in order that ABC is deﬁned. The elements of the ith row of A are ai1 , ai2 , . . . , ain and the elements of the j th column of BC are r r r b1m cmj , b2m cmj , . . . , m=1 m=1 bnm cmj , m=1 so the element in the ith row and j th column of ABC = A(BC ) is r r ai1 m=1 m=1 n = r b2m cmj + · · · + ain b1m cmj + ai2 r aik n r k=1 bkm cmj m=1 = m=1 k=1 bnm cmj m=1 aik bkm cmj . −1 −4 8 7 . 13. (a): A2 = AA = 1 −1 2 3 1 −1 2 3 A3 = A2 A = −1 −4 8 7 A4 = A3 A = −9 −11 22 13 = 1 −1 2 3 −9 −11 22 13 = 1 −1 2 3 . −31 −24 48 17 = . (b): 0 10 0 10 −2 0 1 0 1 −2 0 1 = 4 −3 0 . A2 = AA = −2 4 −1 0 4 −1 0 2 4 −1 0 10 4 −3 0 −2 0 1 0 1 = 6 4 −3 . 0 −2 A3 = A2 A = 4 −3 4 −1 0 −12 3 4 2 4 −1 4 −3 0 0 10 6 4 −3 6 4 −3 −2 0 1 = −20 9 4 . A4 = A3 A = −12 3 4 4 −1 0 10 −16 3 14. (a): (A + B )2 = (A + B )(A + B ) = A(A + B ) + B (A + B ) = A2 + AB + BA + B 2 . (b): (A − B )2 = (A + (−1)B )2 = A2 + A(−1)B + (−1)BA + [(−1)B ]2 2 by part (a) 2 = A − AB − BA + B . 15. 
A2 − 2A − 8I2 = 14 −10 −2 6 −2 = 14 −10 −2 6 + 3 −1 −5 −1 −6 2 10 2 + −8 1 0 −8 0 0 −8 0 1 = 02 . 142 16. 100 A2 = 0 1 0 − 001 1xz Substituting A = 0 1 y for A, we have 001 1xz 1 0 1 y 0 001 0 0 −1 0 1 0 0 −1 = 0 0 0 0 0 xz 1 1 y = 0 01 0 1 1 0 0 1 . 1 0 1 , 1 1 1 0 that is, 1 1 2x 2z + xy = 0 0 1 2y 0 00 1 1 1 0 0 1 . 1 Since corresponding elements of equal matrices are equal, we obtain the following implications: 1 Thus, A = 0 0 2y = 1 =⇒ y = 1/2, 2x = 1 =⇒ x = 1/2, 2z + xy = 0 =⇒ 2z + (1/2)(1/2) = 0 =⇒ z = −1/8. 1/2 −1/8 1 1/2 . 0 1 x1 x1 −2 y −2 y x1 x2 − x − 2 x+y−1 , or equivalently, −2 y −2x − 2y + 2 y 2 − y − 2 trices are equal, it follows that 17. In order that A2 = A, we require = x1 −2 y , that is, x2 − 2 −2x − 2y y 2 − y − 2 = 0 =⇒ y = −1 or y = 2. Two cases arise from x + y − 1 = 0: (a): If x = −1, then y = 2. (b): If x = 2, then y = −1. Thus, 18. 1 0 −1 −2 1 2 0 −i i 0 σ1 σ2 = 0 1 σ2 σ3 = 0 −i i 0 1 0 0 −1 σ3 σ1 = 1 0 0 −1 0 1 1 0 2 1 −2 −1 or A = = = = = = 02 . Since corresponding elements of equal ma- x2 − x − 2 = 0 =⇒ x = −1 or x = 2, and A= x+y −2 + y 2 . i 0 0 −i =i 1 0 0 −1 0i i0 =i 0 1 =i 0 −i i 0 0 −1 1 0 1 0 = iσ3 . = iσ1 . = iσ2 . 143 19. [A, B ] = AB − BA = 3 4 = −1 −1 10 4 − = 20. 1 −1 2 1 −6 2 1 6 1 2 3 4 − 1 −1 2 1 1 2 5 −2 8 −2 = 02 . [A1 , A2 ] = A1 A2 − A2 A1 = 1 0 0 1 = 0 0 1 0 0 0 1 0 0 0 − 0 0 − 1 0 1 0 1 0 0 1 = 02 , thus A1 and A2 commute. [A1 , A3 ] = A1 A3 − A3 A1 = 1 0 0 1 = 0 1 0 0 0 1 0 0 0 1 − 0 1 − 0 0 0 0 1 0 0 1 = 02 , thus A1 and A3 commute. [A2 , A3 ] = A2 A3 − A3 A2 = 21. 1 0 = Then [A3 , A2 ] = −[A2 , A3 ] = 0 0 1 0 0 0 −1 0 0 1 0 1 0 0 0 0 − 0 1 − 0 1 = 0 0 1 0 0 −1 0 0 1 0 = 02 . = 02 . Thus, A2 and A3 do not commute. [A1 , A2 ] = A1 A2 − A2 A1 1 4 1 = 4 1 = 4 = 0 −1 1 0 0i i0 i 0 0 −i 1 4 − 2i 0 0 −2i = − 1 4 0 −1 1 0 0i i0 −i 0 0i 1 2 i 0 0 −i = A3 . [A2 , A3 ] = A2 A3 − A3 A2 1 4 1 = 4 1 = 4 = 0 −1 1 0 0i i0 0 2i 2i 0 i 0 0 −i − 1 4 = − 1 4 i 0 0 −1 0 −i −i 0 1 2 0i i0 = A1 . 
0 −1 1 0 144 [A3 , A1 ] = A3 A1 − A1 A3 1 4 1 = 4 1 = 4 i 0 0 −i 0i i0 0 −1 −1 0 = − 0 −2 2 0 = − 1 4 1 2 0 1 1 4 0i i0 i 0 0 −i 1 0 0 −1 1 0 = A2 . 22. [A, [B, C ]] + [B, [C, A]] + [C, [A, B ]] = [A, BC − CB ] + [B, CA − AC ] + [C, AB − BA] = A(BC − CB ) − (BC − CB )A + B (CA − AC ) − (CA − AC )B + C (AB − BA) − (AB − BA)C = ABC − ACB − BCA + CBA + BCA − BAC − CAB + ACB + CAB − CBA − ABC + BAC = 0. 23. Proof that A(BC ) = (AB )C : Let A = [aij ] be of size m × n, B = [bjk ] be of size n × p, and C = [ckl ] be of size p × q . Consider the (i, j )-element of (AB )C : p n [(AB )C ]ij = k=1 p n aih bhk aih ckj = bhk ckj h=1 h=1 = [A(BC )]ij . k=1 Proof that A(B + C ) = AB + AC : We have n [A(B + C )]ij = aik (bkj + ckj ) k=1 n (aij bkj + aik ckj ) = k=1 n n = aik bkj + k=1 aik ckj k=1 = [AB + AC ]ij . 24. Proof that (AT )T = A: Let A = [aij ]. Then AT = [aji ], so (AT )T = [aji ]T = aij = A, as needed. Proof that (A + C )T = AT + C T : Let A = [aij ] and C = [cij ]. Then [(A + C )T ]ij = [A + C ]ji = [A]ji + [C ]ji = aji + cji = [AT ]ij + [C T ]ij = [AT + C T ]ij . Hence, (A + C )T = AT + C T . 25. We have m (IA)ij = δik akj = δii aij = aij , k=1 for 1 ≤ i ≤ m and 1 ≤ j ≤ p. Thus, Im Am×p = Am×p . 26. Let A = [aij ] and B = [bij ] be n × n matrices. Then n n tr(AB ) = n aki bik k=1 i=1 n = n bik aki k=1 i=1 n i=1 k=1 = bik aki = tr(BA). 145 27. 1 2 3 −1 0 4 AT = 1 2 −1 4 −3 0 1 −1 1 0 2 AAT = 2 3 4 −1 1 −1 1 2 0 2 AB = 3 4 −1 3 4 −1 0 −1 1 2 1 211 BT = , , 1 2 3 4 19 −8 −2 −1 0 4 = −8 17 −3 4 , 1 2 −1 0 −2 4 26 4 −3 0 01 4 10 4 −3 −1 2 = −4 1 , 0 1 1 −5 10 21 0 1 2 3 0 4 0 −1 1 2 −1 = 10 −4 −5 . 4 1 10 2 −1 1 2111 4 −3 0 B T AT = 28. (a): We have z 2z , z −x −y y S = [s1 , s2 , s3 ] = 0 x −y so 2 AS = 2 1 2 5 2 1 −x −y 2 0 y 2 x −y z −x −y y 2z = 0 z x −y 7z 14z = [s1 , s2 , 7s3 ]. 7z (b): −x 0 S T AS = S T (AS ) = −y y z 2z x −x −y −y 0 y z x −y 7z 2x2 = 0 14z 7z 0 but S T AS = diag(1, 1, 7), so we have the following √ 2 2 √ 3 3y 2 = 1 =⇒ y = ± 3 √ 6 2 . 
6z = 1 =⇒ z = ± 6 2 2x = 1 =⇒ x = ± 29. 2 0 (a): 0 0 0 2 0 0 0 0 2 0 0 0 . 0 2 0 3y 2 0 0 0 , 42z 2 146 7 (b): 0 0 0 7 0 0 0 . 7 30. Suppose A is an n × n scalar matrix with trace k . If A = aIn , then tr(A) = na = k , so we conclude that k a = k/n. So A = n In , a uniquely determined matrix. 31. We have ST = and TT = T 1 (A + AT ) 2 1 (A − AT ) 2 = T = 1 1 (A + AT )T = (AT + A) = S 2 2 1T 1 (A − A) = − (A − AT ) = −T. 2 2 Thus, S is symmetric and T is skew-symmetric. 32. 1 1 3 S= 2 7 1 1 T = 3 2 7 13 7 1 −1 5 −5 3 2 −2 10 1 2 1 . 2 4 + −5 2 −2 = −2 4 2 = −1 2 34 6 5 16 −2 6 10 2 12 −5 3 13 7 0 −8 −4 0 −4 −2 1 2 4 − −5 2 −2 = 8 0 6 = 4 0 3 . 2 −2 6 34 6 4 −6 0 2 −3 0 33. If A is an n × n symmetric matrix, then AT = A, so it follows that T= 1 1 (A − AT ) = (A − A) = 0n . 2 2 If A is an n × n skew-symmetric matrix, then AT = −A and it follows that S= 1 1 (A + AT ) = (A + (−A)) = 0n . 2 2 34. Let A be any n × n matrix. Then A= 1 1 1 1 (2A) = (A + AT + A − AT ) = (A + AT ) + (A − AT ), 2 2 2 2 a sum of a symmetric and skew-symmetric matrix, respectively, by Problem 31. 35. If A = [aij ] and D = diag(d1 , d2 , . . . , dn ), then we must show that the (i, j )-entry of DA is di aij . In index notation, we have n (DA)ij = di δik akj = di δii aij = di aij . k=1 Hence, DA is the matrix obtained by multiplying the ith row vector of A by di , where 1 ≤ i ≤ n. 36. (a): We have (AAT )T = (AT )T AT = AAT , so that AAT is symmetric. (b): We have (ABC )T = [(AB )C ]T = C T (AB )T = C T (B T AT ) = C T B T AT , as needed. 37. A (t) = −2e−2t cos t . 147 38. A (t) = 1 − sin t 39. A (t) = et 2et cos t 4 2e2t 8e2t . 2t 10t . − sin t 0 cos t 1 . 3 0 cos t 40. A (t) = sin t 0 41. We show that the (i, j )-entry of both sides of the equation agree. First, recall that the (i, j )-entry of n d AB is k=1 aik bkj , and therefore, the (i, j )-entry of dt (AB ) is (by the product rule) n n n aik bkj + aik bkj = k=1 aik bkj + k=1 aik bkj . 
k=1 The former term is precise the (i, j )-entry of the matrix dA B , while the latter term is precise the (i, j )-entry dt d of the matrix A dB . Thus, the (i, j )-entry of dt (AB ) is precisely the sum of the (i, j )-entry of dA B and the dt dt (i, j )-entry of A dB . Thus, the equation we are proving follows immediately. dt 42. We have π /2 cos t sin t 0 sin t − cos t dt = π /2 0 sin(π/2) − cos(π/2) = − sin 0 − cos 0 = 1 0 0 −1 − 1 1 . 1 − 1/e 5 − 5/e . = 43. We have 1 et 2et 0 e−t 5e−t dt = 44. We have 1 0 −e−t −5e−t et 2et 1 0 = e −1/e 2e −5/e − − 1 cos 2t 2 t tet − et 1 0 3 − 5t 32 tan t t + cos t 2 1 e2 − cos 2 2 2 2 − 0 = −14/3 0 0 tan 1 3 + cos 1 2 e2t sin 2t t2 − 5 dt = tet 2 sec t 3t − sin t 1 −1 2 −5 = e−1 2e − 2 1 2t 2e 3 e2 −1 1 −2 2 −1 = −14/3 1 tan 1 1−cos 2 2 1 2 . 1 + cos 1 45. We have 1 0 et 2et e2t 4e2t t2 5t2 dt = = = 46. 2t 3t2 dt = t2 t3 . t3 3 53 3t 1 0 e e2 /2 1/3 2e 2e2 5/3 − et 2et 1 2t 2e 2t 2e 1 2 1/2 2 0 0 = e−1 2e − 2 e2 −1 2 2 2e − 2 1/3 5/3 . 148 47. 48. 49. sin t − cos t 0 cos t sin t 3t sin t − cos t 3t2 /2 −e−t . −5e−t 1 2t e2t sin 2t 2e t2 − 5 dt = t3 − 5t tet 3 sec2 t 3t − sin t tan t et 2et e−t 5e−t 0 − cos t t dt = − sin t 1 0 dt = 0 t2 /2 . t et 2et 1 − 2 cos 2t tet − et . 32 2 t + cos t Solutions to Section 2.3 True-False Review: 1. FALSE. The last column of the augmented matrix corresponds to the constants on the right-hand side of the linear system, so if the augmented matrix has n columns, there are only n − 1 unknowns under consideration in the system. 2. FALSE. Three distinct planes can intersect in a line (e.g. Figure 2.3.1, lower right picture). For instance, the xy -plane, the xz -plane, and the plane y = z intersect in the x-axis. 3. FALSE. The right-hand side vector must have m components, not n components. 4. TRUE. If a linear system has two distinct solutions x1 and x2 , then any point on the line containing x1 and x2 is also a solution, giving us inﬁnitely many solutions, not exactly two solutions. 5. TRUE. 
The augmented matrix for a linear system has one additional column (containing the constants on the right-hand side of the equation) beyond the matrix of coeﬃcients. 6. FALSE. For instance, if A = to AT x = 0 take the form 0 t 0 0 1 0 , then solutions to Ax = 0 take the form . The solution sets are not the same. Problems: 1. 2 · 1 − 3(−1) + 4 · 2 = 13, 1 + (−1) − 2 = −2, 5 · 1 + 4(−1) + 2 = 3. 2. 2 + (−3) − 2 · 1 = −3, 3 · 2 − (−3) − 7 · 1 = 2, 2 + (−3) + 1 = 0, 2 · 2 + 2(−3) − 4 · 1 = −6. 3. (1 − t) + (2 + 3t) + (3 − 2t) = 6, (1 − t) − (2 + 3t) − 2(3 − 2t) = −7, t 0 , while solutions 149 5(1 − t) + (2 + 3t) − (3 − 2t) = 4. 4. s + (s − 2t) − (2s + 3t) + 5t = 0, 2(s − 2t) − (2s + 3t) + 7t = 0, 4s + 2(s − 2t) − 3(2s + 3t) + 13t = 0. 5. The lines 2x + 3y = 1 and 2x + 3y = 2 are system has no solution. 1 1 2 −3 1 6. A = 2 4 −5 , b = 2 , A# = 2 7 7 2 −1 3 parallel in the xy -plane, both with slope −2/3; thus, the 2 −3 1 4 −5 2 . 2 −1 3 11 1 −1 3 11 ,b = , A# = 24 2 4 −3 7 2 0 1 2 −1 1 2 −1 8. A = 2 3 −2 , b = 0 , A# = 2 3 −2 0 5 6 −5 5 6 −5 7. A = 1 −1 3 −3 72 0 0 . 0 . 9. It is acceptable to use any variable names. We will use x1 , x2 , x3 , x4 : x1 − x2 x1 + x2 3x1 + x2 +2x3 + 3x4 −2x3 + 6x4 +4x3 + 2x4 = 1, = −1, = 2. 10. It is acceptable to use any variable names. We will use x1 , x2 , x3 : 2x1 + x2 4x1 − x2 7x1 + 6x2 +3x3 = 3, +2x3 = 1, +3x3 = −5. 11. Given Ax = 0 and Ay = 0, and an arbitrary constant c, (a): we have Az = A(x + y) = Ax + Ay = 0 + 0 = 0 and Aw = A(cx) = c(Ax) = c0 = 0. (b): No, because A(x + y) = Ax + Ay = b + b = 2b = b, and A(cx) = c(Ax) = cb = b in general. 12. x1 x2 = −4 3 6 −4 x1 x2 + 4t t2 . 150 13. x1 x2 = t2 − sin t −t 1 14. x1 x2 = 0 − sin t e2t 0 x1 0 15. x2 = −et x3 −t − sin t 0 t2 x1 x2 . x1 + x2 1 x1 t2 x2 0 x3 0 . 1 t + t3 . 1 16. We have 4e4t −2(4e4t ) x (t) = and e4t −2e4t 2 −1 −2 3 Ax + b = 17. We have x (t) = + 0 0 = 4(−2e−2t ) + 2 cos t 3(−2e−2t ) + sin t = 4e4t −8e4t 2e4t + (−1)(−2e4t ) + 0 −2e4t + 3(−2e4t ) + 0 = = 4e4t −8e4t . 
−8e−2t + 2 cos t −6e−2t + sin t and Ax + b = = 1 −4 −3 2 4e−2t + 2 sin t 3e−2t − cos t + −2(cos t + sin t) 7 sin t + 2 cos t 4e−2t + 2 sin t − 4(3e−2t − cos t) − 2(cos t + sin t) −3(4e−2t + 2 sin t) + 2(3e−2t − cos t) + 7 sin t + 2 cos t = −8e−2t + 2 cos t −6e−2t + sin t . Solutions to Section 2.4 True-False Review: 1. TRUE. The precise row-echelon form obtained for a matrix depends on the particular elementary row operations (and their order). However, Theorem 2.4.15 states that there is a unique reduced row-echelon form for a matrix. 2. FALSE. Upper triangular matrices could have pivot entries that are not 1. For instance, the following 20 matrix is upper triangular, but not in row echelon form: . 00 3. TRUE. The pivots in a row-echelon form of an n × n matrix must move down and to the right as we look from one row to the next beneath it. Thus, the pivots must occur on or to the right of the main diagonal of the matrix, and thus all entries below the main diagonal of the matrix are zero. 4. FALSE. This would not be true, for example, if A was a zero matrix with 5 rows and B was a nonzero matrix with 4 rows. 5. FALSE. If A is a nonzero matrix and B = −A, then A + B = 0, so rank(A + B ) = 0, but rank(A), rank(B ) ≥ 1 so rank(A)+ rank(B ) ≥ 2. 6. FALSE. For example, if A = B = 1 + 1 = 2. 0 0 1 0 , then AB = 0, so rank(AB ) = 0, but rank(A)+ rank(B ) = 151 7. TRUE. A matrix of rank zero cannot have any pivots, hence no nonzero rows. It must be the zero matrix. 8. TRUE. The matrices A and 2A have the same reduced row-echelon form, since we can move between the two matrices by multiplying the rows of one of them by 2 or 1/2, a matter of carrying out elementary row operations. If the two matrices have the same reduced row-echelon form, then they have the same rank. 9. TRUE. 
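The rank counterexamples in review items 5 and 6 can be verified numerically. In the sketch below, A is one reading of the garbled 2 × 2 nilpotent counterexample from item 6.

```python
import numpy as np

# Rank is not additive: checks for review items 5 and 6.
A = np.array([[0, 0],
              [1, 0]])
B = A.copy()

# Item 6: AB = 0 even though A and B are nonzero.
print(np.linalg.matrix_rank(A @ B))                          # 0
print(np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B))   # 2

# Item 5: with B = -A, rank(A + B) = 0 < rank(A) + rank(B).
print(np.linalg.matrix_rank(A + (-A)))                       # 0
```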
The matrices A and 2A have the same reduced row-echelon form, since we can move between the two matrices by multiplying the rows of one of them by 2 or 1/2, a matter of carrying out elementary row operations. Problems: 1. Row-echelon form. 2. Neither. 3. Reduced row-echelon form. 4. Neither. 5. Reduced row-echelon form. 6. Row-echelon form. 7. Reduced row-echelon form. 8. Reduced row-echelon form. 9. 2 1 1 −3 1 ∼ 1 −3 2 1 2 ∼ 2 −4 −4 8 1 ∼ 1 −2 −4 8 1 1. M1 ( 2 ) 11. 2 14 3 −2 6 1 2 −3 4 ∼ 2 −3 4 3 −2 6 2 14 1 1 5 1 ∼ 0 0 −5 1. P13 12. 2. A21 (−1) 0 0 0 1 1 3 3 ∼ 2. A12 (−2) 1. P12 10. 1 −3 0 7 2 ∼ 1 −3 0 1 , Rank (A) = 2. 1 3. M2 ( 7 ) 1 −2 0 0 , Rank (A) = 1. 2. A12 (4) 1 12 1 12 1 12 3 4 ∼ 2 −3 4 ∼ 0 −5 0 ∼ 0 −1 0 2 14 0 −1 0 0 −5 0 2 112 6 0 ∼ 0 1 0 , Rank (A) = 2. 000 0 2 3. A12 (−2), A13 (−3) 3 01 1 4 ∼ 0 0 5 00 3 0 2 1 ∼ 0 4 0 4. P23 5. M2 (−1) 13 0 1 , Rank (A) = 2. 00 6. A32 (5) 152 1. A12 (−1), A13 (−3) 2. A23 (−4) 13. 2 −1 3 2 1 3 1 3 1 3 1 1 2 3 4 5 3 2 ∼ 2 −1 ∼ 2 −1 ∼ 0 −7 ∼ 0 −1 ∼ 0 2 5 2 5 2 5 0 −1 0 −7 0 1. P12 2. A21 (−1) 3. A12 (−2), A13 (−2) 4. P23 3 1 , Rank (A) = 2. 0 5. M2 (−1), A23 (7). 14. 2 −1 3 3 1 −2 1 1 2 3 1 −2 ∼ 2 −1 3 ∼ 2 2 −2 1 2 −2 1 0 12 1 2 −5 6 5 1 2 ∼ 0 1 ∼ 0 00 0 −5 13 2. A21 (−1), A23 (−1) 1. P12 15. 2 −1 1 −2 1 −5 3 1 0 2 3 1 2 3. A12 (−2) 4. P23 1 2 −5 1 2 −5 4 0 −5 13 ∼ 0 −1 −2 0 −1 −2 0 −5 13 −5 2 , Rank (A) = 3. 1 5. M2 (−1) 1 −2 4 1 −2 1 3 1 −2 1 3 1 2 3 3 ∼ 2 −1 3 4 ∼ 0 3 1 −2 ∼ 0 1 5 1 −5 0 5 0 00 0 0 0 1. P12 16. 2 −5 3 −1 3 ∼ −1 −2 −5 12 7 2 ∼ 0 1 23 00 2. A12 (−2), A13 (−1) 2. A12 (−3), A13 (−2), A14 (−2) 1 1 3 0 3 7. M3 (1/23). 2 − 3 , Rank (A) = 2. 0 3. M2 (1/3) 1 −1 10 1 −1 10 −2 −1 3 3 120 1 01 −2 3 1 1 3 −2 ∼ ∼ 0 −3 3 −1 1 0 2 −2 −1 3 0 0 1 02 2 −1 22 −1 22 1 −1 1 0 10 1 40 , Rank (A) = 4. ∼ 0 0 1 −1 0 00 1 1. P13 6. A23 (5) 1 −1 10 30 1 0 1 ∼ 0 0 −3 3 0 0 01 3. A24 (−1) 4. M3 (1/3) 17. 
4 74 7 1 21 2 3 53 513 53 5 2 −2 2 −2 ∼ 2 −2 2 −2 5 −2 5 −2 5 −2 5 −2 1 21 2 1 21 2 2 0 −1 0 −1 3 0 10 1 ∼ 0 −6 0 −6 ∼ 0 −6 0 −6 0 −12 0 −12 0 −12 0 −12 153 1 40 ∼ 0 0 2 1 0 0 1 0 0 0 2 1 , Rank (A) = 2. 0 0 1. A21 (−1) 2. A12 (−3), A13 (−2), A14 (−5) 3. M2 (−1) 4. A23 (6), A24 (12) 18. 21 1 0 23 3 2 1 4 1 5 2 1 1 3 ∼ 2 7 2 0 1 3 2 3 1 1 4 5 3 1 2 2 ∼ 0 7 0 0 21 3 10 2 1 3 3 1 −1 2 −4 ∼ 0 1 −1 2 −4 3 −3 3 1 00 0 −3 1 10 21 3 4 ∼ 0 1 −1 2 −4 , Rank (A) = 3. 1 00 0 1 −3 2. A12 (−2), A13 (−2), 1. P12 19. 3 2 1 −1 1 ∼ 1 −1 3 2 2 ∼ 1. P12 20. 3 2 1 1. P13 21. 1 −1 0 5 3 ∼ 2. A12 (−3) 3. A23 (−3) 1 −1 0 1 4 ∼ 3. M2 ( 1 ) 5 1 0 4. M3 (− 1 ) 3 0 1 = I2 , Rank (A) = 2. 4. A21 (1) 7 10 12 1 1 2 1 121 1 1 2 3 4 3 −1 ∼ 2 3 −1 ∼ 0 −1 −3 ∼ 0 1 3 ∼ 0 2 1 3 7 10 0 1 7 017 0 100 1 0 −5 6 5 3 ∼ 0 1 0 = I3 , Rank (A) = 3. ∼ 0 1 00 1 001 2. A12 (−2), A13 (−3) 3. M2 (−1) 4. A21 (−2), A23 (−1) 1 5. M3 ( 4 ) 0 −5 1 3 0 4 6. A31 (5), A32 (−3) 3 −3 6 1 −1 2 1 2 −2 4 ∼ 0 0 0 , Rank (A) = 1. 6 −6 12 0 00 1. M1 ( 1 ), A12 (−2), A13 (−6) 3 22. 3 5 −12 1 2 −5 1 1 2 2 3 −7 ∼ 0 −1 3 ∼ 0 −2 −1 1 0 3 −9 0 2 −5 10 1 3 1 −3 ∼ 0 1 −3 , Rank (A) = 2. 3 −9 00 0 154 1. A21 (−1), A12 (−2), A13 (2) 23. 1 3 2 4 1 −1 −1 2 −1 −1 2 1 31 −2 0 710 ∼ 1 40 −1 2 4 0 0 2 70 −2 38 1 100 5 450 40 1 0 ∼ 0 0 1 −1 ∼ 0 0 000 1 1. A12 (−3), A13 (−2), A14 (−4) 1 2 3 0 0 0 0 1 0 0 0 0 1 0 1 20 ∼ 0 0 1 2 3 3 130 ∼ 1 −1 0 0 1 −2 0 1 0 0 0 1 0 0 0 5 0 4 1 −1 0 −1 0 0 = I4 , Rank (A) = 4. 0 1 3. A31 (−2), A32 (−3), A34 (−1) 5. A41 (−5), A42 (−4), A43 (1) 3 1 −2 1 3 1 −2 1 3 1 −2 0 1 1 2 3 7 ∼ 0 0 −1 −2 ∼ 0 0 1 2 ∼ 0 0 1 2 , Rank (A) = 2. 10 0 0 −1 −2 0 0 −1 −2 0 000 1. A12 (−3), A13 (−4) 25. 3. A21 (−2), A23 (−3) 2. A21 (1), A23 (−1), A24 (−2) 4. M4 (−1) 24. 1 −2 3 −6 4 −8 2. M2 (−1) 1 3 2 2 1 0 1 1 2 ∼ 1 0 4 ∼ 0 0 1. A12 (−3), A13 (−2) 2. M2 (−1) 01 2 1 01 2 0 0 −6 −2 ∼ 0 0 0 0 −4 −1 00 1 0 1/3 010 5 0 1 1/3 ∼ 0 0 1 00 1 000 2. M2 (− 1 ) 6 3. A21 (−1), A23 (1) 2 1 01 3 1 1/3 ∼ 0 0 −4 −1 00 0 0 , Rank (A) = 3. 1 3. 
A21 (−2), A23 (4) 4. M3 (3) 0 1 0 1/3 1/3 1/3 1 5. A32 (− 3 ), A31 (− 1 ) 3 Solutions to Section 2.5 True-False Review: 1. FALSE. This process is known as Gaussian elimination. Gauss-Jordan elimination is the process by which a matrix is brought to reduced row echelon form via elementary row operations. 2. TRUE. A homogeneous linear system always has the trivial solution x = 0, hence it is consistent. 3. TRUE. The columns of the row-echelon form that contain leading 1s correspond to leading variables, while columns of the row-echelon form that do not contain leading 1s correspond to free variables. 4. TRUE. If the last column of the row-reduced augmented matrix for the system does not contain a pivot, then the system can be solved by back-substitution. On the other hand, if this column does contain 155 a pivot, then that row of the row-reduced matrix containing the pivot in the last column corresponds to the impossible equation 0 = 1. 5. FALSE. The linear system x = 0, y = 0, z = 0 has a solution in (0, 0, 0) even though none of the variables here is free. 6. FALSE. The columns containing the leading 1s correspond to the leading variables, not the free variables. Problems: For the problems of this section, A will denote the coeﬃcient matrix of the given system, and A# will denote the augmented matrix of the given system. 1. Converting the given system of equations to an augmented obtain the following equivalent matrices: 1 1211 1 2 1 12 1 2 3 5 1 3 ∼ 0 −1 −2 0 ∼ 0 1 2671 0 2 5 −1 02 1. A12 (−3), A13 (−2) matrix and using Gaussian elimination we 1 1 1 121 3 2 0 ∼ 0 1 2 0 . 5 −1 0 0 1 −1 2. M2 (−1) 3. A23 (−2) The last augmented matrix results in the system: x1 + 2x2 + x3 = 1, x2 + 2x3 = 0, x3 = −1. By back substitution we obtain the solution (−2, 2, −1). 2. 
Converting the given system of equations to an augmented matrix and using Gaussian elimination, we obtain the following equivalent matrices: 1 −2 −5 −3 1 −2 −5 −3 1 3 −1 0 1 2 2 1 5 4 ∼ 2 1 5 4 ∼ 0 5 15 10 7 −5 −8 −3 7 −5 −8 −3 0 9 27 18 1 −2 −5 −3 1011 3 4 1 3 2 ∼ 0 1 3 2 . ∼ 0 0 9 27 18 0000 1. A21 (−1) 2. A12 (−2), A13 (−7) 3. M2 ( 1 ) 5 4. A21 (2), A23 (−9) The last augmented matrix results in the system: x1 + x3 = 1, x2 + 3x3 = 2. Let the free variable x3 = t, a real number. By back substitution we ﬁnd that the system has the solution set {(1 − t, 2 − 3t, t) : for all real numbers t}. 3. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following equivalent matrices: 156 3 5 −1 14 12 1 1 2 1 3 ∼ 3 5 25 62 25 1 4 ∼ 0 0 1. P12 13 1 2 −1 4 ∼ 0 62 0 21 3 5 14 5 ∼ 0 0 −9 2. A12 (−3), A13 (−2) 3 3 2 1 121 3 −1 −4 −5 ∼ 0 1 4 5 1 4 −4 0 1 4 −4 1213 0 1 4 5 . 0001 3. M2 (−1) 4. A23 (−1) 1 5. M4 (− 9 ) This system of equations is inconsistent since 2 = rank(A) < rank(A# ) = 3. 4. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following equivalent matrices: 1 1 6 −3 3 12 2 1 −2 −1 1 −2 2 1 1 2 −1 ∼ 2 −1 ∼ 0 1 4 1 4 0 −4 2 −2 −8 −4 2 −2 −8 0 0 1. M1 ( 1 ) 6 1 2 0 0 2 0 . 0 2. A12 (−2), A13 (4) Since x2 and x3 are free variables, let x2 = s and x3 = t. The single equation obtained from the augmented matrix is given by x1 − 1 x2 + 1 x3 = 2. Thus, the solution set of our system is given by 2 2 {(2 + s t − , s, t) : s, t any real numbers }. 22 5. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following equivalent matrices: 2 −1 3 14 3 1 −2 −1 1 2 −5 −15 3 1 −2 −1 1 2 −1 3 14 2 2 −1 3 −14 ∼ ∼ 7 2 −3 3 7 2 −3 3 7 2 −3 3 5 −1 −2 5 5 −1 −2 5 5 −1 −2 5 1 2 4 0 −12 ∼ 0 −5 0 −11 12 70 1 ∼ 00 00 1. P12 5. 
A42 (−1) 1 2 −5 −15 30 −5 13 44 ∼ 0 −12 32 108 0 −11 23 80 −5 −15 1 2 −5 −15 1 2 −5 −15 32 108 5 0 −1 9 28 6 0 1 −9 −28 ∼ ∼ 13 44 0 −5 13 44 0 −5 13 44 23 80 0 −11 23 80 0 −11 23 80 −5 −15 1 2 −5 −15 1 2 −5 −15 −9 −28 8 0 1 −9 −28 9 0 1 −9 −28 ∼ ∼ . −32 −96 0 0 32 96 0 0 1 3 −76 −228 0 0 −76 −228 00 0 0 2. A21 (−1) 6. M2 (−1) 3. A12 (−2), A13 (−7), A14 (−5) 7. A23 (5), A24 (11) 4. P23 8. M3 (−1) 1 9. M3 ( 32 ), A34 (76). 157 The last augmented matrix results in the system of equations: x1 − 2x2 − 5x3 = −15, x2 − 9x3 = −28, x3 = 3. Thus, using back substitution, the solution set for our system is given by {(2, −1, 3)}. 6. Converting the given system of equations to an augmented obtain the following equivalent matrices: 2 −1 −4 1 1 −3 −3 1 1 5 3 2 −5 813 2 −5 8 2 0 −1 ∼ ∼ 5 6 −6 20 5 6 −6 20 0 1 1 1 −3 −3 2 −1 −4 −5 0 −3 1 1 −3 1 1 −3 −3 −3 −4 −17 5 0 1 −4 −17 40 1 ∼ ∼ 0 0 13 52 0 0 1 4 0 0 −10 −40 0 0 −10 −40 1. P14 2. A12 (−3), A13 (−5), A14 (−2) 3. M2 (−1) matrix and using Gaussian elimination we −3 −3 4 17 9 35 2 11 1 60 ∼ 0 0 1 1 −3 −3 30 1 −4 −17 ∼ 0 35 1 9 0 −3 2 11 1 −3 −3 1 −4 −17 . 0 1 4 0 0 0 4. A23 (−1), A24 (3) 1 5. M3 ( 13 ) 6. A34 (10) The last augmented matrix results in the system of equations: x1 + x2 − 3x3 = − 3, x2 − 4x3 = −17, x3 = 4. By back substitution, we obtain the solution set {(10, −1, 4)}. 7. Converting the given system of equations to an augmented matrix obtain the following equivalent matrices: 1 2 −1 1 1 2 −1 1 1 1 2 4 −2 2 2 ∼ 0 0 00 5 10 −5 5 5 00 00 and using Gaussian elimination we 1 0 . 0 1. A12 (−2), A13 (−5) The last augmented matrix results in the equation x1 + 2x3 − x3 + x4 = 1. Now x2 , x3 , and x4 are free variables, so we let x2 = r, x3 = s, and x4 = t. It follows that x1 = 1 − 2r + s − t. Consequently, the solution set of the system is given by {(1 − 2r + s − t, r, s, t) : r, s, t and real numbers }. 8. 
Converting the given system of equations to an augmented obtain the following equivalent matrices: 1 2 −1 11 1 2 −1 1 1 2 −3 1 −1 2 1 0 −7 3 −3 0 ∼ 1 −5 2 −2 1 0 −7 3 −3 0 4 1 −1 13 0 −7 3 −3 −1 matrix and using Gaussian elimination we 1 1 2 −1 1 3 3 20 1 −7 0 7 ∼ 0 −7 3 −3 0 0 −7 3 −3 −1 158 1 2 −1 3 3 0 1 −7 ∼ 00 0 00 0 1 3 7 0 0 1 1 040 ∼ 0 0 −1 0 1. A12 (−2), A13 (−1), A14 (−4) 2 −1 1 −3 7 0 0 0 0 1 2. M2 (− 7 ) 1 1 050 ∼ −1 0 0 0 1 3 7 0 0 2 −1 1 −3 7 0 0 0 0 3. A23 (7), A24 (7) 4. P34 1 3 7 0 0 1 0 . 1 0 5. M3 (−1) The given system of equations is inconsistent since 2 = rank(A) < rank(A# ) = 3. 9. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices: 12 1 1 −2 3 12 1 1 −2 3 1 2 1 1 −2 3 1 2 0 0 1 4 −3 2 ∼ 0 0 1 4 −3 2 ∼ 0 0 1 4 −3 2 . 2 4 −1 −10 50 0 0 −3 −12 9 −6 0000 00 1. A13 (−2) 2. A23 (3) The last augmented matrix indicates that the ﬁrst two equations of the initial system completely determine its solution. We see that x4 and x5 are free variables, so let x4 = s and x5 = t. Then x3 = 2 − 4x4 + 3x5 = 2 − 4s +3t. Moreover, x2 is a free variable, say x2 = r, so then x1 = 3 − 2r − (2 − 4s +3t) − s +2t = 1 − 2r +3s − t. Hence, the solution set for the system is {(1 − 2r + 3s − t, r, 2 − 4s + 3t, s, t) : r, s, t any real numbers }. 10. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices: 2 −1 −2 2 4 4 1 4 1 1 4 1 1 2 4 3 −2 −1 ∼ 4 3 −2 −1 ∼ 0 −13 −6 −17 1 4 1 4 2 −1 −1 2 0 −9 −3 −6 1 4 1 4 1 4 1 4 1 4 1 4 5 6 12 4 8 ∼ 0 12 4 8 ∼ 0 −1 −2 ∼ 0 0 −13 −6 −17 0 −1 −2 −9 0 12 4 1 0 −7 −32 1 0 −7 −32 1 8 9 10 2 9 ∼ 0 1 2 9 ∼ 0 ∼ 0 1 0 0 −20 −100 00 1 5 0 1. P13 6. P23 2. A12 (−4), A13 (−2) 7. M2 (−1) 3. P23 8. A21 (−4), A23 (−12) 4. M2 (− 4 ) 3 9. 1 4 1 3 ∼ 0 −9 −3 0 −13 −6 4 141 7 −9 ∼ 0 1 2 0 12 4 8 00 3 1 0 −1 . 01 5 4 −6 −17 4 9 8 5. A23 (1) 1 M3 (− 20 ) 10. 
A31 (7), A32 (−2) The last augmented matrix results in the solution (3, −1, 5). 11. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices: 31 52 1 1 −1 1 1 1 −1 1 1 2 1 1 −1 1 ∼ 3 1 5 2 ∼ 0 −2 8 −1 21 23 21 23 0 −1 4 1 159 1 1 −1 1 −4 ∼ 0 0 −1 4 3 1 1 1 −1 1 ∼ 0 1 −4 1/2 . 2 00 0 3/2 1 1 4 We can stop here, since we see from this last augmented matrix that the system is inconsistent. In particular, 2 = rank(A) < rank(A# ) = 3. 1. P12 2. A12 (−3), A13 (−2) 1 3. M2 (− 2 ) 4. A23 (1) 12. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices: 1 0 −2 −3 1 0 −2 −3 1 0 −2 −3 1 0 −2 −3 1 2 3 3 −2 0 ∼ 0 0 ∼ 0 1 −1 0 4 −9 ∼ 0 −2 2 1 −1 1 −4 2 −3 0 −4 4 0 −4 4 00 0 0 0 0 . 1 2. M2 (− 2 ) 1. A12 (−3), A13 (−1) 3. A23 (4) The last augmented matrix results in the following system of equations: x1 − 2x3 = −3 and x2 − x3 = 0. Since x3 is free, let x3 = t. Thus, from the system we obtain the solutions {(2t − 3, t, t) : t any real number }. 13. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices: 2 −1 3 −1 3 6 6 1 −2 3 1 1 −2 3 1 1 2 3 2 1 −5 −6 ∼ 3 2 1 −5 −6 ∼ 0 8 −8 −8 −24 1 −2 3 1 6 2 −1 3 −1 3 0 3 −3 −3 −9 6 0 10 1 −1 1 −2 3 1 4 3 1 −1 −1 −3 ∼ 0 1 −1 −1 −3 . ∼ 0 0 3 −3 −3 −9 00 0 0 0 1. P13 2. A12 (−3), A13 (−2) 1 3. M2 ( 8 ) 4. A21 (2), A23 (−3) The last augmented matrix results in the following system of equations: x1 + x3 − x4 = 0 and x2 − x3 − x4 = −3. Since x3 and x4 are free variables, we can let x3 = s and x4 = t, where s and t are real numbers. It follows that the solution set of the system is given by {(t − s, s + t − 3, s, t) : s, t any real numbers }. 14. 
Converting the given system of equations to an augmented matrix and using we obtain the following equivalent matrices: 1 1 1 −1 11 1 1 1 −1 4 4 1 −1 −1 −1 2 1 0 −2 −2 0 −2 2 0 1 ∼ ∼ 1 1 −1 1 −2 0 0 −2 2 −6 0 0 1 −1 1 1 −8 0 −2 0 2 −12 01 Gauss-Jordan elimination 1 −1 4 1 0 1 1 −1 3 0 −1 6 160 10 0 −1 1 0 30 1 ∼ 00 1 −1 0 0 −1 −1 1 0 0 −1 3 3 140 1 0 1 −2 5 ∼ ∼ 0 0 1 −1 3 3 5 8 0 0 0 −2 1. A12 (−1), A13 (−1), A14 (−1) 4. A32 (−1), A34 (1) 5. 1 0 0 −1 3 010 1 −2 6 ∼ 0 0 1 −1 3 000 1 −4 1 1 1 2. M2 (− 2 ), M3 (− 2 ), M4 (− 2 ) 1 M4 (− 2 ) 1 0 0 0 −1 0100 2 . 0 0 1 0 −1 0 0 0 1 −4 3. A24 (−1) 6. A41 (1), A42 (−1), A43 (1) It follows from the last augmented matrix that the solution to the system is given by (−1, 2, −1, −4). 15. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices: 1 −3 −2 −1 −2 1 −3 −2 −1 −2 2 −1 3 1 −1 11 2 2 1 −3 −2 −1 −2 2 2 −1 3 1 −1 11 0 5 7 3 3 7 1 2 ∼ 3 ∼ 0 10 3 1 −2 −1 1 −2 1 −2 −1 1 −2 4 2 7 −8 1 2 1 2 3 −3 1 2 1 2 3 −3 0 5 3 3 5 −5 5 −3 −3 1 2 2 5 −3 −3 1 2 2 0 12 7 6 12 −8 11 4 1 31 4 31 11 10 −5 1 −3 −2 −1 −2 2 −1 10 5 5 5 5 5 5 5 7 7 3 3 7 3 3 7 3 3 7 7 0 0 1 0 1 1 5 5 5 5 5 5 5 5 4 5 5 5 5 5 3 2 1 11 ∼ 0 0 ∼ 0 0 −10 −4 4 2 7 −8 1 −22 1 − 10 ∼ 0 10 5 5 0 5 3 3 5 −5 0 0 −4 0 2 −12 0 0 −4 0 2 −12 6 24 49 24 0 12 7 6 12 −8 − 124 0 0 − 5 −5 0 0 − 49 − 6 − 124 5 5 5 5 5 5 2 1 34 2 1 34 1 6 1 0 0 − 25 1 0 0 − 25 1 0 0 0 10 50 25 50 25 5 1 37 42 1 37 42 7 0 1 0 0 1 0 0 1 0 0 − 25 − 25 −8 25 50 25 50 10 5 6 7 8 2 1 11 2 1 1 11 − 10 3 − 10 ∼ 0 0 1 5 5 ∼ 0 0 1 5 5 ∼ 0 0 1 0 −2 8 8 16 0 0 0 0 0 0 0 0 0 1 1 −2 1 1 −2 −5 5 5 11 68 191 11 68 191 000 − 81 0 0 0 0 10 000 − 81 25 5 25 50 25 25 50 1 6 1 0 0 0 10 10000 1 5 7 8 0 1 0 0 − 5 0 1 0 0 0 −3 10 10 9 0 0 1 0 −1 4 3 ∼ 0 0 1 0 0 ∼ 2 0 0 0 1 1 −2 0 0 0 1 0 −4 00001 2 2 0000 1 1. P12 8. 2. A12 (−2), A13 (−3), A14 (−1), A15 (−5) 2 A41 ( 25 ), 3. M2 ( 1 ) 5 1 5. M3 (− 10 ) 6. 
A31 (− 11 ), A32 (− 7 ), 5 5 1 68 A42 (− 25 ), A43 (− 2 ), A45 (− 25 ) 9. M5 ( 10 ) 5 11 4. A21 (3), A23 (−10), A24 (−5), A25 (−12) 5 A34 (4), A35 ( 49 ) 7. M4 ( 8 ) 5 1 7 10. A51 (− 10 ), A52 (− 10 ), A53 ( 1 ), A54 (−1) 2 It follows from the last augmented matrix that the solution to the system is given by (1, −3, 4, −4, 2). 16. The equation Ax = b reads 1 −3 1 x1 8 5 −4 1 x2 = 15 . 2 4 −3 x3 −4 Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices: 1 −3 1 8 1 −3 1 8 1 −3 1 8 1 2 5 −4 1 15 ∼ 0 11 −4 −25 ∼ 0 1 1 −5 2 4 −3 −4 0 10 −5 −20 0 10 −5 −20 161 1 10 4 −7 1 0 4 −7 100 4 5 1 −5 ∼ 0 1 1 −5 ∼ 0 1 0 −3 . ∼ 0 1 0 0 −15 30 0 0 1 −2 0 0 1 −2 3 1. A12 (−5), A13 (−2) 2. A32 (−1) 1 4. M3 (− 15 ) 3. A21 (3), A23 (−10) 5. A31 (−4), A32 (−1) Thus, from the last augmented matrix, we see that x1 = 1, x2 = −3, and x3 = −2. 17. The equation Ax = b reads 1 0 5 x1 0 3 −2 11 x2 = 2 . 2 −2 6 x3 2 Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices: 1 0 50 1 0 5 1 0 50 0 1 2 3 −2 11 2 ∼ 0 −2 −4 2 ∼ 0 1 2 −1 2 −2 62 0 −2 −4 2 0 −2 −4 2 105 0 3 ∼ 0 1 2 −1 . 000 0 1. A12 (−3), A13 (−2) 2. M2 (−1/2) 3. A23 (2) Hence, we have x1 + 5x3 = 0 and x2 + 2x3 = −1. Since x3 is a free variable, we can let x3 = t, where t is any real number. It follows that the solution set for the given system is given by {(−5t, −2t − 1, t) : t ∈ R}. 18. The equation Ax = b reads x1 −2 0 1 −1 0 5 1 x2 = 8 . 5 02 1 x3 Converting the given system of equations to an augmented matrix using Gauss-Jordan elimination we obtain the following equivalent matrices: 0 1 −1 −2 0 1 −1 −2 0 1 −1 −2 0101 1 2 3 0 5 1 8 ∼ 0 0 6 18 ∼ 0 0 1 3 ∼ 0 0 1 3 . 02 1 5 00 3 9 00 3 9 0000 1. A12 (−5), A13 (−2) 2. M2 (1/6) 3. 
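Reductions like the one in Problem 16 can be replicated with exact rational arithmetic. A minimal Gauss-Jordan sketch (not the textbook's exact pivoting order, just the same idea), checked against Problem 16's data:

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix to reduced row-echelon form, exactly."""
    A = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols - 1):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]   # scale pivot row to 1
        for i in range(rows):                # clear the rest of the column
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [y - f * x for x, y in zip(A[r], A[i])]
        r += 1
    return A

# Problem 16: A = [[1,-3,1],[5,-4,1],[2,4,-3]], b = (8, 15, -4).
R = gauss_jordan([[1, -3, 1, 8], [5, -4, 1, 15], [2, 4, -3, -4]])
solution = tuple(row[-1] for row in R)
assert solution == (1, -3, -2)
```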
A21 (1), A23 (−3)

Consequently, from the last augmented matrix it follows that the solution set for the matrix equation is given by {(t, 1, 3) : t ∈ R}.

19. The equation Ax = b reads

[ 1 −1 0 −1 ]   [ x1 ]   [ 2 ]
[ 2  1 3  7 ]   [ x2 ] = [ 2 ]
[ 3 −2 1  0 ]   [ x3 ]   [ 4 ]
                [ x4 ]

Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[ 1 −1 0 −1 | 2 ]
[ 2  1 3  7 | 2 ]
[ 3 −2 1  0 | 4 ]

∼ (1)

[ 1 −1 0 −1 |  2 ]
[ 0  3 3  9 | −2 ]
[ 0  1 1  3 | −2 ]

∼ (2)

[ 1 −1 0 −1 |  2 ]
[ 0  1 1  3 | −2 ]
[ 0  3 3  9 | −2 ]

∼ (3)

[ 1 0 1 2 |  0 ]
[ 0 1 1 3 | −2 ]
[ 0 0 0 0 |  4 ]

1. A12 (−2), A13 (−3)   2. P23   3. A21 (1), A23 (−3)

From the last row of the last augmented matrix, it is clear that the given system is inconsistent.

20. The equation Ax = b reads

[  1 1  0  1 ]   [ x1 ]   [  2 ]
[  3 1 −2  3 ]   [ x2 ] = [  8 ]
[  2 3  1  2 ]   [ x3 ]   [  3 ]
[ −2 3  5 −2 ]   [ x4 ]   [ −9 ]

Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[  1 1  0  1 |  2 ]
[  3 1 −2  3 |  8 ]
[  2 3  1  2 |  3 ]
[ −2 3  5 −2 | −9 ]

∼ (1)

[ 1  1  0 1 |  2 ]
[ 0 −2 −2 0 |  2 ]
[ 0  1  1 0 | −1 ]
[ 0  5  5 0 | −5 ]

∼ (2)

[ 1  1  0 1 |  2 ]
[ 0  1  1 0 | −1 ]
[ 0 −2 −2 0 |  2 ]
[ 0  5  5 0 | −5 ]

∼ (3)

[ 1 0 −1 1 |  3 ]
[ 0 1  1 0 | −1 ]
[ 0 0  0 0 |  0 ]
[ 0 0  0 0 |  0 ]

1. A12 (−3), A13 (−2), A14 (2)   2. P23   3. A21 (−1), A23 (2), A24 (−5)

From the last augmented matrix, we obtain the system of equations: x1 − x3 + x4 = 3, x2 + x3 = −1. Since both x3 and x4 are free variables, we may let x3 = r and x4 = t, where r and t are real numbers. The solution set for the system is given by {(3 + r − t, −r − 1, r, t) : r, t ∈ R}.

21. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[ 1 2 −1  |  3 ]
[ 2 5  1  |  7 ]
[ 1 1 −k² | −k ]

∼ (1)

[ 1  2 −1    |  3   ]
[ 0  1  3    |  1   ]
[ 0 −1  1−k² | −3−k ]

∼ (2)

[ 1 2 −1    |  3   ]
[ 0 1  3    |  1   ]
[ 0 0  4−k² | −2−k ]

1. A12 (−2), A13 (−1)   2. A23 (1)

(a): If k = 2, then the last row of the last augmented matrix reveals an inconsistency; hence the system has no solutions in this case.
(b): If k = −2, then the last row of the last augmented matrix consists entirely of zeros, so we have only two pivots (first two columns) and a free variable x3; hence the system has infinitely many solutions.

(c): If k ≠ ±2, then the last augmented matrix above contains a pivot for each variable x1, x2, and x3, and can be solved for a unique solution by back-substitution.

22. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[ 2  1 −1  1 | 0 ]
[ 1  1  1 −1 | 0 ]
[ 4  2 −1  1 | 0 ]
[ 3 −1  1  k | 0 ]

∼ (1)

[ 1  1  1 −1 | 0 ]
[ 2  1 −1  1 | 0 ]
[ 4  2 −1  1 | 0 ]
[ 3 −1  1  k | 0 ]

∼ (2)

[ 1  1  1 −1  | 0 ]
[ 0 −1 −3  3  | 0 ]
[ 0 −2 −5  5  | 0 ]
[ 0 −4 −2 k+3 | 0 ]

∼ (3)

[ 1  1  1 −1  | 0 ]
[ 0  1  3 −3  | 0 ]
[ 0 −2 −5  5  | 0 ]
[ 0 −4 −2 k+3 | 0 ]

∼ (4)

[ 1 1  1 −1  | 0 ]
[ 0 1  3 −3  | 0 ]
[ 0 0  1 −1  | 0 ]
[ 0 0 10 k−9 | 0 ]

∼ (5)

[ 1 1 1 −1  | 0 ]
[ 0 1 3 −3  | 0 ]
[ 0 0 1 −1  | 0 ]
[ 0 0 0 k+1 | 0 ]

1. P12   2. A12 (−2), A13 (−4), A14 (−3)   3. M2 (−1)   4. A23 (2), A24 (4)   5. A34 (−10)

(a): Note that the trivial solution (0, 0, 0, 0) exists under all circumstances, so there are no values of k for which there is no solution.

(b): From the last row of the last augmented matrix, we see that if k = −1, then the variable x4 corresponds to an unpivoted column, and hence it is a free variable. In this case, therefore, we have infinitely many solutions.

(c): Provided that k ≠ −1, each variable in the system corresponds to a pivoted column of the last augmented matrix above. Therefore, we can solve the system by back-substitution. The conclusion from this is that there is a unique solution, (0, 0, 0, 0).

23. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[ 1 1 −2 |  4 ]
[ 3 5 −4 | 16 ]
[ 2 3 −a |  b ]

∼ (1)

[ 1 1 −2  | 4   ]
[ 0 2  2  | 4   ]
[ 0 1 4−a | b−8 ]

∼ (2)

[ 1 1 −2  | 4   ]
[ 0 1  1  | 2   ]
[ 0 1 4−a | b−8 ]

∼ (3)

[ 1 0 −3  | 2    ]
[ 0 1  1  | 2    ]
[ 0 0 3−a | b−10 ]

1. A12 (−3), A13 (−2)   2. M2 (1/2)   3. A21 (−1), A23 (−1)

(a): From the last row of the last augmented matrix above, we see that there is no solution if a = 3 and b ≠ 10.
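The case analyses in Problems 21–23 reduce to comparing rank(A), rank(A#), and the number of unknowns in a row-echelon form. A small sketch (function names are mine), applied to the final matrix of Problem 22 for sample values of k:

```python
def classify(aug, nvars):
    """Classify a row-echelon augmented matrix as having no solution,
    a unique solution, or infinitely many, by comparing ranks."""
    rank_A = sum(1 for row in aug if any(x != 0 for x in row[:-1]))
    rank_aug = sum(1 for row in aug if any(x != 0 for x in row))
    if rank_A < rank_aug:
        return "none"
    return "unique" if rank_A == nvars else "infinite"

# Final matrix of Problem 22 for a given k (homogeneous system).
def final22(k):
    return [[1, 1, 1, -1, 0],
            [0, 1, 3, -3, 0],
            [0, 0, 1, -1, 0],
            [0, 0, 0, k + 1, 0]]

assert classify(final22(-1), 4) == "infinite"  # k = -1: x4 is free
assert classify(final22(3), 4) == "unique"     # k != -1: only (0,0,0,0)
```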
(b): From the last row of the last augmented matrix above, we see that there are infinitely many solutions if a = 3 and b = 10, because in that case there is no pivot in the column of the last augmented matrix corresponding to the third variable x3.

(c): From the last row of the last augmented matrix above, we see that if a ≠ 3, then regardless of the value of b, there is a pivot corresponding to each variable x1, x2, and x3. Therefore, we can uniquely solve the corresponding system by back-substitution.

24. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[  1  −a  | 3 ]
[  2   1  | 6 ]
[ −3 a+b  | 1 ]

∼

[ 1  −a   |  3 ]
[ 0 1+2a  |  0 ]
[ 0 b−2a  | 10 ]

From the middle row, we see that if a ≠ −1/2, then we must have x2 = 0, but this leads to an inconsistency in solving for x1 (the first equation would require x1 = 3, while the last equation would require x1 = −1/3). Now suppose that a = −1/2. Then the augmented matrix on the right reduces to

[ 1 −1/2 |  3 ]
[ 0  b+1 | 10 ]

If b = −1, then once more we have an inconsistency in the last row. However, if b ≠ −1, then the row-echelon form obtained has full rank, and there is a unique solution. Therefore, we draw the following conclusions:

(a): There is no solution to the system if a ≠ −1/2, or if a = −1/2 and b = −1.

(b): Under no circumstances are there an infinite number of solutions to the linear system.

(c): There is a unique solution if a = −1/2 and b ≠ −1.

25. The corresponding augmented matrix for this linear system can be reduced to row-echelon form via

[ 1 1 1 | y1 ]
[ 2 3 1 | y2 ]
[ 3 5 1 | y3 ]

∼ (1)

[ 1 1  1 | y1       ]
[ 0 1 −1 | y2 − 2y1 ]
[ 0 2 −2 | y3 − 3y1 ]

∼ (2)

[ 1 1  1 | y1            ]
[ 0 1 −1 | y2 − 2y1      ]
[ 0 0  0 | y1 − 2y2 + y3 ]

1. A12 (−2), A13 (−3)   2. A23 (−2)

For consistency, we must have rank(A) = rank(A#), which requires (y1, y2, y3) to satisfy y1 − 2y2 + y3 = 0.
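The consistency condition of Problem 25 can also be read off the rows of A: the combination R1 − 2R2 + R3 that produces the zero row on the left must annihilate the right-hand side as well. A quick check:

```python
# Problem 25: the coefficient rows satisfy R1 - 2*R2 + R3 = 0, which is
# why the right-hand side must satisfy y1 - 2*y2 + y3 = 0.
rows = [(1, 1, 1), (2, 3, 1), (3, 5, 1)]
combo = [r1 - 2 * r2 + r3 for r1, r2, r3 in zip(*rows)]
assert combo == [0, 0, 0]

def consistent(y1, y2, y3):
    """Consistency test for the right-hand side (y1, y2, y3)."""
    return y1 - 2 * y2 + y3 == 0

assert consistent(1, 1, 1)       # e.g. b = (1, 1, 1) is attainable
assert not consistent(1, 0, 0)   # while b = (1, 0, 0) is not
```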
If this holds, then the system has an inﬁnite number of solutions, because the column of the augmented matrix corresponding to y3 will be unpivoted, indicating that y3 is a free variable in the solution set. 26. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following row-equivalent matrices. Since a11 = 0: a11 a21 a12 a22 b1 b2 1 ∼ 1 0 a12 a11 a22 a11 −a21 a12 a11 1. M1 (1/a11 ), A12 (−a21 ) b1 a11 a11 b2 −a21 b1 a11 2 ∼ 1 0 a12 a11 ∆ a11 b1 a11 ∆2 a11 . 2. Deﬁnition of ∆ and ∆2 (a): If ∆ = 0, then rank(A) = rank(A# ) = 2, so the system has a unique solution (of course, we are assuming ∆2 a11 = 0 here). Using the last augmented matrix above, a∆ x2 = a11 , so that x2 = ∆2 . Using this, we can ∆ 11 solve x1 + a12 a11 x2 = b1 a11 for x1 to obtain x1 = ∆1 ∆, where we have used the fact that ∆1 = a22 b1 − a12 b2 . a12 b1 1 a11 a11 , so it follows that 00 ∆2 the system has (i) no solution if ∆2 = 0, since rank(A) < rank(A# ) = 2, and (ii) an inﬁnite number of solutions if ∆2 = 0, since rank(A# ) < 2. (b): If ∆ = 0 and a11 = 0, then the augmented matrix of the system is (c): An inﬁnite number of solutions would be represented as one line. No solution would be two parallel lines. A unique solution would be the intersection of two distinct lines at one point. 27. We ﬁrst use the partial pivoting algorithm 3 1211 1 3 5 1 3 ∼ 1 2671 2 to reduce the augmented matrix of the system: 513 3 5 1 3 2 2 1 1 ∼ 0 1/3 2/3 0 671 0 8/3 19/3 −1 165 3 1 3 4 19/3 −1 ∼ 0 2/3 0 0 1. P12 2. A12 (−1/3), A13 (−2/3) 3 5 ∼ 0 8/3 0 1/3 3 3 5 1 8/3 19/3 −1 . 0 −1/8 1/8 3. P23 4. A23 (−1/8) Using back substitution to solve the equivalent system yields the unique solution (−2, 2, −1). 28. 
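The partial-pivoting algorithm of Problems 27–30 (swap the entry of largest absolute value into the pivot position, eliminate below it, then back-substitute) can be sketched as follows. Exact Fractions are used for a clean check against Problem 27's data, although pivoting matters chiefly in floating-point arithmetic; the system is assumed nonsingular:

```python
from fractions import Fraction

def solve_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting plus back substitution.
    Assumes the coefficient matrix is nonsingular."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda i: abs(M[i][c]))  # partial pivot
        M[c], M[piv] = M[piv], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [y - f * x for x, y in zip(M[c], M[i])]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):          # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return tuple(x)

# Problem 27: rows (1,2,1 | 1), (3,5,1 | 3), (2,6,7 | 1).
assert solve_partial_pivot([[1, 2, 1], [3, 5, 1], [2, 6, 7]],
                           [1, 3, 1]) == (-2, 2, -1)
```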
We ﬁrst use the partial pivoting algorithm to reduce the augmented matrix of the system: 2 −1 3 14 7 2 −3 7 2 −3 3 3 3 1 −2 −1 1 3 1 −2 −1 2 0 1/7 −5/7 −16/7 ∼ ∼ 7 3 2 −1 92/7 2 −3 3 14 0 −11/7 27/7 5 −1 −2 5 −1 −2 0 −17/7 1/7 5 5 20/7 7 2 −3 7 2 −3 3 3 1/7 20/7 4 0 −17/7 1/7 20/7 3 0 −17/7 ∼ ∼ 92/7 0 192/17 0 −11/7 27/7 0 64/17 0 1/7 −5/7 −16/7 0 0 −12/17 −36/17 7 2 −3 3 1/7 20/7 5 0 −17/7 . ∼ 0 0 64/17 192/17 0 0 0 0 1. P13 2. A12 (−3/7), A13 (−2/7), A14 (−5/7) 4. A23 (−11/17), A24 (1/17) 3. P24 5. A34 (3/16) Using back substitution to solve the equivalent system yields the unique solution (2, −1, 3). 29. We ﬁrst use the partial pivoting algorithm to 5 6 2 −1 −4 5 3 2 −5 813 2 ∼ 5 6 −6 20 2 −1 1 1 −3 −3 1 1 reduce the augmented matrix of the −6 −20 5 6 −6 −5 8 2 0 −8/5 −7/5 ∼ 5 0 −17/5 −8/5 −4 −3 −3 0 −1/5 −9/5 5 6 −6 20 5 3 0 −17/5 −8/5 −3 4 0 ∼ 0 −8/5 −7/5 −4 ∼ 0 0 −1/5 −9/5 −7 0 5 6 −6 20 0 −17/5 −8/5 6 −3 5 ∼ ∼ 0 0 −29/17 −116/17 0 0 −11/17 −44/17 1. P13 6 −6 20 −17/5 −8/5 −3 0 −11/17 −44/17 0 −29/17 −116/17 5 6 −6 20 0 −17/5 −8/5 −3 . 0 0 −29/17 −116/17 0 0 0 0 2. A12 (−3/5), A13 (−2/5), A14 (−1/5) 4. A23 (−8/17), A24 (−1/17) system: 20 −4 −3 −7 5. P34 3. P23 6. A34 (−11/29) 166 Using back substitution to solve the equivalent system yields the unique solution (10, −1, 4). 30. We ﬁrst use the partial pivoting algorithm to reduce the augmented matrix of the system: 2 2 −1 −1 4 3 −2 −1 4 3 −2 −1 1 2 4 3 −2 −1 ∼ 2 −1 −1 2 ∼ 0 −5/2 0 5/2 4 4 1 4 1 1 4 1 0 13/4 3/2 17/4 4 3 −2 −1 −1 4 3 −2 3 4 17/4 . 3/2 ∼ 0 13/4 3/2 17/4 ∼ 0 13/4 0 −5/2 0 5/2 0 0 15/13 75/13 1. P12 2. A12 (−1/2), A13 (−1/4) 3. P23 4. A23 (10/13) Using back substitution to solve the equivalent system yields the unique solution (3, −1, 5). 31. (a): Let A = # a11 a21 a31 ... an1 0 a22 a32 ... an2 0 0 a33 ... an3 ... ... ... ... ... 0 0 0 ... ann b1 b2 b3 ... bn represent the corresponding augmented matrix of the given system. Since a11 x1 = b1 , we can solve for x1 easily: b1 x1 = , (a11 = 0). 
a11 Now since a21 x1 + a22 x2 = b2 , by using the expression for x1 we just obtained, we can solve for x2 : x2 = a11 b2 − a21 b1 . a11 a22 In a similar manner, we can solve for x3 , x4 , . . . , xn . (b): We solve instantly for x1 from the ﬁrst equation: x1 = 2. Substituting this into the middle equation, we obtain 2 · 2 − 3 · x2 = 1, from which it quickly follows that x2 = 1. Substituting for x1 and x2 in the bottom equation yields 3 · 2 + 1 − x3 = 8, from which it quickly follows that x3 = −1. Consequently, the solution of the given system is (2, 1, −1). 32. This system of equations is not linear in x1 , x2 , and x3 ; however, the system is linear in x3 , x2 , and x3 , 1 2 so we can ﬁrst solve for x3 , x2 , and x3 . Converting the given system of equations to an augmented matrix 1 2 and using Gauss-Jordan elimination we obtain the following equivalent matrices: 4 2 3 12 1 −1 12 1 −1 1 2 1 2 1 −1 1 2 ∼ 4 2 3 12 ∼ 0 6 −1 4 3 1 −1 2 3 1 −1 2 0 4 −4 −4 1 −1 1 2 1 −1 1 2 10 0 1 3 4 5 4 −4 −4 ∼ 0 1 −1 −1 ∼ 0 1 −1 −1 ∼ 0 4 4 0 6 −1 0 6 −1 00 5 10 10 0 1 1001 6 7 ∼ 0 1 −1 −1 ∼ 0 1 0 1 . 2 00 1 0012 167 2. A12 (−4), A13 (−3) 1. P12 5. A21 (1), A23 (−6) 3. P23 6. M2 (1/5) 4. M2 (1/4) 7. A32 (1) Thus, taking only real solutions, we have x3 = 1, x2 = 1, and x3 = 2. Therefore, x1 = 1, x2 = ±1, and 1 2 x3 = 2, leading to the two solutions (1, 1, 2) and (1, −1, 2) to the original system of equations. There is no contradiction of Theorem 2.5.9 here since, as mentioned above, this system is not linear in x1 , x2 , and x3 . 33. Reduce the augmented matrix of the system: 3 2 −1 0 1 1 −2 0 1 1 −2 0 10 30 1 2 3 2 1 1 0 ∼ 0 −1 5 0 ∼ 0 1 −5 0 ∼ 0 1 −5 0 5 −4 10 0 −9 11 0 0 −9 11 0 0 0 −34 0 1000 10 30 5 4 ∼ 0 1 −5 0 ∼ 0 1 0 0 . 00 10 0010 1. A21 (−1), A12 (−2), A13 (−5) 4. M3 (−1/34) 2. M2 (−1) 3. A21 (−1), A23 (9) 5. A31 (−3), A32 (5) Therefore, the unique solution to this system is x1 = x2 = x3 = 0: (0, 0, 0). 34. 
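The triangular-system argument of Problem 31 is forward substitution. A sketch (the function name is mine), checked against the system of part (b):

```python
from fractions import Fraction

def forward_substitute(L, b):
    """Solve a lower-triangular system Lx = b, solving for x1, then x2,
    and so on, as in Problem 31 (assumes nonzero diagonal entries)."""
    n = len(L)
    x = []
    for i in range(n):
        s = sum(Fraction(L[i][j]) * x[j] for j in range(i))
        x.append((Fraction(b[i]) - s) / Fraction(L[i][i]))
    return tuple(x)

# Problem 31(b): x1 = 2, 2*x1 - 3*x2 = 1, 3*x1 + x2 - x3 = 8.
assert forward_substitute([[1, 0, 0], [2, -3, 0], [3, 1, -1]],
                          [2, 1, 8]) == (2, 1, -1)
```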
Reduce the augmented matrix of the system: 2 1 −1 0 1 −1 −1 0 3 −1 2 0 1 3 −1 20 1 −1 −1 0 ∼ 2 1 −1 0 5 2 −2 0 5 2 −2 0 1 −1 −1 1 −4 40 ∼ 0 2 5 0 7 3 1. P13 0 1 0 −5 0 5 0 1 −4 ∼ 0 0 0 13 0 0 0 31 1 −1 −1 20 2 5 ∼ 0 3 1 0 7 3 0 1 060 ∼ 0 0 0 0 2. A12 (−3), A13 (−2), A14 (−5) 5. A21 (1), A23 (−2), A24 (−7) 6. M3 (1/13) 0 03 ∼ 0 0 1 −1 −1 0 3 1 0 2 5 0 7 3 0 −5 0 1 1 −4 0 7 0 ∼ 0 1 0 0 0 31 0 0 3. P23 0 1 0 0 0 0 1 0 4. A32 (−1) 7. A31 (5), A32 (4), A34 (−31) Therefore, the unique solution to this system is x1 = x2 = x3 = 0: (0, 0, 0). 35. Reduce the augmented matrix 2 −1 −1 5 −1 2 1 1 4 of the system: 0 1 1 40 1 1 40 1 2 0 ∼ 5 −1 2 0 ∼ 0 −6 −18 0 0 2 −1 −1 0 0 −3 −9 0 1 1 40 1010 4 1 3 0 ∼ 0 1 3 0 . ∼ 0 0 −3 −9 0 0000 3 1. P13 2. A12 (−5), A13 (−2) 3. M2 (−1/6) 4. A21 (−1), A23 (3) 0 0 0 0 0 0 . 0 0 168 It follows that x1 + x3 = 0 and x2 + 3x3 = 0. Setting x3 = t, where t is a free variable, we get x2 = −3t and x1 = −t. Thus we have that the solution set of the system is {(−t, −3t, t) : t ∈ R}. 36. Reduce the augmented matrix of the system: 0 0 0 i 1+i −i 1 1−i −1 1 + 2i 1 − i 1 1 2 i 1+i −i 0 ∼ 1 + 2i 1 − i 1 0 ∼ 1 + 2i 1 − i 1 0 2i 1 1 + 3i 0 2i 1 1 + 3i 0 2i 1 1 + 3i 0 1 1−i −1 0 4 ∼ 0 −2 − 2i 1 + 2i 0 ∼ 0 −1 − 2i 1 + 5i 0 1 1−i −1 0 6 0 1 3i ∼ 0 0 0 −5 + 8i 0 3 2. M1 (−i) 1. P12 6. P23 1 1−i −1 0 −2 − 2i 1 + 2i 0 1 3i 1 1 − i −1 7 ∼ 0 1 3i 0 0 1 3. A12 (−1 − 2i), A13 (−2i) 7. 1 M3 ( −5+8i ) 0 1 1−i −1 0 5 0 ∼ 0 0 −5 + 8i 0 0 0 1 3i 0 1000 0 8 0 ∼ 0 1 0 0 . 0 0010 4. A23 (−1) 5. A32 (2 + 2i) 8. A21 (−1 + i), A31 (1), A32 (−3i) Therefore, the unique solution to this system is x1 = x2 = x3 = 0: (0, 0, 0). 37. Reduce the augmented matrix of the system: 2 3 210 1 3 1 6 −1 2 0 ∼ 6 −1 12 640 12 6 2 1 3 3 1 ∼ 0 0 −2 1. M1 (1/3) 1 3 0 0 1 3 2 4 2 0 1 3 2 0 ∼ 0 −5 0 −2 0 0 10 4 0 ∼ 0 1 0 00 2. A12 (−6), A13 (−12) 1 3 0 0 3. M2 (−1/5) 1 3 0 0 0 0 0 0 0 . 0 4. A21 (−2/3), A23 (2) From the last augmented matrix, we have x1 + 1 x3 = 0 and x2 = 0. 
Since x3 is a free variable, we let x3 = t, 3 where t is a real number. It follows that the solution set for the given system is given by {(t, 0, −3t) : t ∈ R}. 38. Reduce the augmented matrix 2 1 −8 3 −2 −5 5 −6 −3 3 −5 1 of the system: 0 3 −2 −5 012 1 −8 ∼ 0 5 −6 −3 0 3 −5 1 1 −3 3 7 −14 30 ∼ 0 9 −18 0 4 −8 1. P12 2. A21 (−1) 0 1 −3 3 022 1 −8 ∼ 0 5 −6 −3 0 3 −5 1 0 1 −3 3 040 1 −2 ∼ 0 0 9 −18 0 0 4 −8 3. A12 (−2), A13 (−5), A14 (−3) 0 0 0 0 0 1 0 −3 0 0 5 0 1 −2 0 ∼ 0 0 0 00 0 00 00 4. M2 (1/7) . 5. A21 (3), A23 (−9), A24 (−4) 169 From the last augmented matrix we have: x1 − 3x3 = 0 and x2 − 2x3 = 0. Since x3 is a free variable, we let x3 = t, where t is a real number. It follows that x2 = 2t and x1 = 3t. Thus, the solution set for the given system is given by {(3t, 2t, t) : t ∈ R}. 39. Reduce the augmented matrix of the system: 1 1+i 1−i 0 1 1 0 ∼ 0 i 1 i 1 − 2i −1 + i 1 − 3i 0 0 1 1+i 1−i 3 −2−i 1 ∼ 0 5 0 0 0 1. A12 (−i), A13 (−1 + 2i) 1+i 2−i −4 + 2i 0 4 0 ∼ 0 2. A23 (2) 1−i 0 1 1+i 1−i 0 2 −1 0 ∼ 0 2 − i −1 0 2 0 0 0 0 0 1 0 6−2i 0 5 0 1 −2−i 0 . 5 0 00 0 1 3. M2 ( 2−i ) 4. A21 (−1 − i) From the last augmented matrix we see that x3 is a free variable. We set x3 = 5s, where s ∈ C. Then x1 = 2(i − 3)s and x2 = (2 + i)s. Thus, the solution set of the system is {(2(i − 3)s, (2 + i)s, 5s) : s ∈ C}. 40. Reduce the augmented matrix of the system: 1 −1 10 1 −1 10 0 3 2 010 3 20 ∼ 3 0 −1 0 0 3 −4 0 5 1 −1 0 0 6 −6 0 1 0 5/3 1 0 5/3 0 2/3 0 4 0 1 2/3 30 1 ∼ 0 0 −6 0 ∼ 0 0 1 0 0 −10 0 0 0 −10 1. A13 (−3), A14 (−5) 4. M3 (−1/6) 1 20 ∼ 0 0 0 05 ∼ 0 0 −1 1 3 6 1 0 0 0 0 1 0 0 1 2/3 −4 −6 0 0 1 0 0 0 0 0 0 0 . 0 0 3. A21 (1), A23 (−3), A24 (−6) 2. M2 (1/3) 5. A31 (−5/3), A32 (−2/3), A34 (10) Therefore, the unique solution to this system is x1 = x2 = x3 = 0: (0, 0, 0). 41. Reduce the augmented matrix of the system: 2 −4 60 1 −2 3 −6 9 013 −6 ∼ 1 −2 3 0 2 −4 5 −10 15 0 5 −10 1. M1 (1/2) 3 9 6 15 0 1 −2 3 0 020 0 0 0 ∼ . 0 0 0 0 0 0 0 000 2. 
A12 (−3), A13 (−2), A14 (−5) From the last matrix we have that x1 − 2x3 + 3x3 = 0. Since x2 and x3 are free variables, let x2 = s and let x3 = t, where s and t are real numbers. The solution set of the given system is therefore {(2s − 3t, s, t) : s, t ∈ R}. 42. Reduce the augmented matrix of the system: 4 −2 −1 −1 0 1 −3 1 −4 0 1 −3 1 −4 0 1 2 3 1 −2 3 0 ∼ 3 1 −2 3 0 ∼ 0 10 −5 15 0 5 −1 −2 10 5 −1 −2 10 0 14 −7 21 0 170 1 −3 1 −4 0 1 −3 1 −4 0 1 −3 1 4 5 2 −1 3 0 ∼ 0 2 −1 3 0 ∼ 0 1 −1/2 ∼ 0 0 2 −1 30 0 0 0 00 0 0 0 −4 0 3/2 0 . 0 0 3 1. A21 (−1) 2. A12 (−3), A13 (−5) 4. A23 (−1) 3. M2 (1/5), M3 (1/7) 5. M2 (1/2) 1 From the last augmented matrix above we have that x2 − 2 x3 + 3 x4 = 0 and x1 − 3x2 + x3 − 4x4 = 0. Since x3 2 and x4 are free variables, we can set x3 = 2s and x4 = 2t, where s and t are real numbers. Then x2 = s − 3t and x1 = s − t. It follows that the solution set of the given system is {(s − t, s − 3t, 2s, 2t) : s, t ∈ R}. 43. Reduce the augmented matrix of the system: 2 1 −1 10 1 1 1 1 1 −1 0 1 2 1 ∼ 3 −1 1 −2 0 3 −1 4 2 −1 10 4 2 1 1 1 −1 0 10 1 3 −3 0 4 0 1 30 ∼ ∼ 0 −4 −2 1 0 0 0 0 −2 −5 50 00 1 0 −2 20 100 0 3 −3 0 7 0 1 0 0 60 1 ∼ ∼ 0 0 1 −1 0 0 0 1 −1 0 0 10 −11 0 0 0 0 −1 1 −1 0 1 1 1 −1 −1 1 0 2 0 −1 −3 3 ∼ 1 −2 0 0 −4 −2 1 −1 10 0 −2 −5 5 −2 20 1 0 −2 2 3 −3 0 5 0 1 3 −3 ∼ 10 −11 0 0 0 −3 3 −3 30 0 0 10 −11 0 100 00 10 080 1 0 0 090 1 ∼ ∼ 0 0 0 1 −1 0 0 0 000 10 00 0 1. P12 2. A12 (−2), A13 (−3), A14 (−4) 5. P34 6. M3 (−1/3) 3. M2 (−1) 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 . 0 0 4. A21 (−1), A23 (4), A24 (2) 7. A31 (2), A32 (−3), A34 (−10) 8. M4 (−1) 9. A43 (1) From the last augmented matrix, it follows that the solution set to the system is given by {(0, 0, 0, 0)}. 44. The equation Ax = 0 is 2 −1 3 4 x1 x2 0 0 = . Reduce the augmented matrix of the system: 2 −1 0 3 40 1 ∼ 1 −1 2 3 4 1. M1 (1/2) 0 0 2 ∼ 1 −1 2 0 11 2 2. A12 (−3) 0 0 1 1 −2 0 1 3 ∼ 3. M2 (2/11) 0 0 4 ∼ 1 0 00 10 4. 
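As with the other homogeneous systems, a two-parameter solution set such as Problem 42's can be verified by multiplying it back into A — here reading the coefficient matrix as A = [[4, −2, −1, −1], [3, 1, −2, 3], [5, −1, −2, 1]], the reading under which the stated solution set checks out:

```python
# Problem 42: every vector (s - t, s - 3t, 2s, 2t) should satisfy Ax = 0.
A = [[4, -2, -1, -1], [3, 1, -2, 3], [5, -1, -2, 1]]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

for s in range(-3, 4):
    for t in range(-3, 4):
        assert matvec(A, [s - t, s - 3 * t, 2 * s, 2 * t]) == [0, 0, 0]
```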
A21 (1/2) From the last augmented matrix, we see that x1 = x2 = 0. Hence, the solution set is {(0, 0)}. 45. The equation Ax = 0 is 1 − i 2i 1 + i −2 x1 x2 = 0 0 1 −1 + i 0 0 1+i −2 ∼ . Reduce the augmented matrix of the system: 1 − i 2i 0 1 + i −2 0 1 ∼ 2 1 −1 + i 0 0 0 0 . . 171 1. M1 ( 1+i ) 2 2. A12 (−1 − i) It follows that x1 + (−1 + i)x2 = 0. Since x2 is a free variable, we can let x2 = t, where t is a complex number. The solution set to the system is then given by {(t(1 − i), t) : t ∈ C}. 46. The equation Ax = 0 is 1 + i 1 − 2i −1 + i 2 + i x1 x2 = 0 0 . Reduce the augmented matrix of the system: 1 + i 1 − 2i 0 −1 + i 2 + i 0 1 ∼ 1 − 1+3i 2 −1 + i 2 + i 1. M1 ( 1−i ) 2 0 0 1 − 1+3i 2 0 0 2 ∼ 0 0 . 2. A12 (1 − i) It follows that x1 − 1+3i x2 = 0. Since x2 is a free variable, we can let x2 = r, where r is any complex 2 number. Thus, the solution set to the given system is {( 1+3i r, r) : r ∈ C}. 2 47. The equation Ax = 0 is x1 0 1 23 2 −1 0 x2 = 0 . 0 1 11 x3 Reduce the augmented 1 23 2 −1 0 1 11 matrix of 0 1 0 ∼ 0 10 4 ∼ 0 1 00 1. A12 (−2), A13 (−1) the system: 1 2 30 1 2 0 −5 −6 0 ∼ 0 0 −1 −2 0 0 1 0 −1 −1 0 5 2 0 ∼ 0 1 2 40 00 1 2. P23 3. M2 (−1) 2 3 −1 −2 −5 −6 0 6 0 ∼ 0 0 1 2 30 3 0 ∼ 0 1 2 0 0 0 −5 −6 0 1000 0 1 0 0 . 0010 4. A21 (−2), A23 (5) 5. M3 (1/4) 6. A31 (1), A32 (−2) From the last augmented matrix, we see that the only solution to the given system is x1 = x2 = x3 = 0: {(0, 0, 0)}. 48. The equation Ax = 0 is x 11 1 −1 1 x −1 0 −1 2 2 x3 13 2 2 x4 Reduce the augmented 11 1 −1 −1 0 −1 2 13 2 2 matrix of the 1 0 1 0 ∼ 0 0 0 0 0 = . 0 0 system: 1 1 −1 0 1 0 1 −2 0 1 2 3 10 1 0 ∼ 0 1 0 1 0 ∼ 0 21 30 001 10 0 1. A12 (1), A13 (−1) 2. A21 (−1), A23 (−2) 3. A31 (−1) 0 1 0 0 −3 0 0 1 0 . 1 10 172 From the last augmented matrix, we see that x4 is a free variable. We set x4 = t, where t is a real number. The last row of the reduced row echelon form above corresponds to the equation x3 + x4 = 0. Therefore, x3 = −t. 
The second row corresponds to the equation x2 + x4 = 0, so we likewise ﬁnd that x2 = −t. Finally, from the ﬁrst equation we have x1 − 3x4 = 0, so that x1 = 3t. Consequently, the solution set of the original system is given by {(3t, −t, −t, t) : t ∈ R}. 49. The equation Ax = 0 is 2 − 3i 1 + i i−1 x1 0 3 + 2i −1 + i −1 − i x2 = 0 . 5−i 2i −2 x3 0 Reduce the augmented matrix of this system: −1+5i −5−i 2 − 3i 1 + i i−1 0 1 0 1 13 13 1 2 3 + 2i −1 + i −1 − i 0 ∼ 3 + 2i −1 + i −1 − i 0 ∼ 0 5−i 2i −2 0 5−i 2i −2 0 0 1. M1 ( 2+3i ) 13 −1+5i 13 −5−i 13 0 0 0 0 0 0 . 0 2. A12 (−3 − 2i), A13 (−5 + i) 5− From the last augmented matrix, we see that x1 + −1+5i x2 + −13 i x3 = 0. Since x2 and x3 are free variables, 13 we can let x2 = 13r and x3 = 13s, where r and s are complex numbers. It follows that the solution set of the system is {(r(1 − 5i) + s(5 + i), 13r, 13s) : r, s ∈ C}. 50. The equation Ax = 0 is x1 1 30 0 −2 −3 0 x2 = 0 . 1 40 x3 0 Reduce the augmented matrix of the system: 1 300 1300 1300 1 1 2 3 −2 −3 0 0 ∼ 0 3 0 0 ∼ 0 1 0 0 ∼ 0 1 400 0100 0300 0 1. A12 (2), A13 (−1) 2. P23 0 1 0 00 0 0 . 00 3. A21 (−3), A23 (−3) From the last augmented matrix we see that the solution set of the system is {(0, 0, t) : t ∈ R}. 51. The equation Ax = 0 is 1 0 3 3 −1 7 2 1 8 1 1 5 −1 1 −1 Reduce the augmented matrix of the system: 1 0 3 1 0 30 3 −1 7 0 0 −1 −2 1 2 1 8 0 ∼ 0 1 2 1 1 5 0 0 1 2 −1 1 −1 0 0 1 2 x1 0 x2 = 0 . x3 0 0 0 0 0 0 2 ∼ 1 0 0 0 0 0 1 1 1 1 3 2 2 2 2 0 0 0 0 0 3 ∼ 1 0 0 0 0 0 1 0 0 0 3 2 0 0 0 0 0 0 0 0 . 173 1. A12 (−3), A13 (−2), A14 (−1), A15 (1) 2. M2 (−1) 3. A23 (−1), A24 (−1), A25 (−1) From the last augmented matrix, we obtain the equations x1 + 3x3 = 0 and x2 + 2x3 = 0. Since x3 is a free variable, we let x3 = t, where t is a real number. The solution set for the given system is then given by {(−3t, −2t, t) : t ∈ R}. 52. The equation Ax = 0 is x 1 −1 0 1 1 3 −2 0 5 x2 x3 −1 201 x4 Reduce the augmented matrix of 1 −1 0 3 −2 0 −1 20 0 = 0 . 
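Complex homogeneous systems check the same way; Python's complex literals make Problem 45's solution family easy to verify exactly (all entries are Gaussian integers, so no rounding is involved):

```python
# Problem 45: verify that x = (t*(1 - i), t) solves Ax = 0 for
# A = [[1 - i, 2i], [1 + i, -2]].
A = [[1 - 1j, 2j], [1 + 1j, -2]]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

for t in (1, 2 + 1j, -3j):
    x = [t * (1 - 1j), t]
    assert matvec(A, x) == [0, 0]
```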
0 the system: 1 −1 0 1 0 1 10 1 2 5 0 ∼ 0 1 0 2 0 ∼ 0 10 0 1020 0 1. A12 (−3), A13 (1) 0 1 0 0 0 0 30 2 0 . 00 2. A21 (1), A23 (−1) From the last augmented matrix we obtain the equations x1 + 3x4 = 0 and x2 + 2x4 = 0. Because x3 and x4 are free, we let x3 = t and x4 = s, where s and t are real numbers. It follows that the solution set of the system is {(−3s, −2s, t, s) : s, t ∈ R}. 53. The equation Ax = 0 is 1 3 −2 x1 0 −3 0 0 x2 = 0 . 0 −9 0 x3 0 60 0 x4 Reduce the augmented matrix of the system: 1 0 −3 0 0 1 0 −3 0 0 1 3 0 −9 0 0 ∼ 0 0 0 0 0 . −2 0 600 00 000 1. A12 (−3), A13 (2) From the last augmented matrix we obtain x1 − 3x3 = 0. Therefore, x2 , x3 , and x4 are free variables, so we let x2 = r, x3 = s, and x4 = t, where r, s, t are real numbers. The solution set of the given system is therefore {(3s, r, s, t) : r, s, t ∈ R}. 54. The equation Ax = 0 is 2+i i 3 − 2i x1 0 i 1 − i 4 + 3i x2 = 0 . 3 − i 1 + i 1 + 5i x3 0 Reduce the augmented matrix 2+i i 3 − 2i i 1 − i 4 + 3i 3 − i 1 + i 1 + 5i of the system: 0 i 1 − i 4 + 3i 0 1 −1 − i 3 − 3i 0 1 2 0 ∼ 2+i i 3 − 2i 0 ∼ 2 + i i 3 − 2i 0 0 3 − i 1 + i 1 + 5i 0 3 − i 1 + i 1 + 5i 0 174 1 −1 − i 3 − 4i 0 1 −1 − i 3 − 4i 4 5+31i 1 ∼ 0 1 + 4i −7 + 3i 0 ∼ 0 17 0 5 + 3i −4 + 20i 0 0 5 + 3i −4 + 20i 1 0 25−32i 0 100 17 6 7 ∼ 0 1 5+31i 0 ∼ 0 1 0 17 001 0 00 1 3 1. P12 2. M1 (−i) 3. A12 (−2 − i), A13 (−3 + i) 6. M3 (−i/10) 7. 0 1 5 0 ∼ 0 0 0 0 0 . 0 4. M2 ( 1−4i ) 17 A31 ( −25+32i ), 17 25−32i 17 5+31i 17 0 1 0 10i 0 0 0 5. A21 (1 + i), A23 (−5 − 3i) − A32 ( −51731i ) From the last augmented matrix above, we see that the only solution to this system is the trivial solution. Solutions to Section 2.6 True-False Review: 1. FALSE. An invertible matrix is also known as a nonsingular matrix. 11 22 2. FALSE. For instance, the matrix does not contain a row of zeros, but fails to be invertible. 3. TRUE. If A is invertible, then the unique solution to Ax = b is x = A−1 b. 10 100 4. FALSE. 
For instance, if A = and B = 0 0 , then AB = I2 , but A is not even a square 001 01 matrix, hence certainly not invertible. 5. FALSE. For instance, if A = In and B = −In , then A and B are both invertible, but A + B = 0n is not invertible. 6. TRUE. We have (AB )B −1 A−1 = In and B −1 A−1 (AB ) = In , and therefore, AB is invertible, with inverse B −1 A−1 . 7. TRUE. From A2 = A, we subtract to obtain A(A − I ) = 0. Left multiplying both sides of this equation by A−1 (since A is invertible, A−1 exists), we have A − I = A−1 0 = 0. Therefore, A = I , the identity matrix. 8. TRUE. From AB = AC , we left-multiply both sides by A−1 (since A is invertible, A−1 exists) to obtain A−1 AB = A−1 AC . Since A−1 A = I , we obtain IB = IC , or B = C . 9. TRUE. Any 5 × 5 invertible matrix must have rank 5, not rank 4 (Theorem 2.6.5). 10. TRUE. Any 6 × 6 matrix of rank 6 is invertible (Theorem 2.6.5). Problems: 1. We have AA−1 = 2 −1 3 −1 −1 −3 1 2 = (2)(−1) + (−1)(−3) (2)(1) + (−1)(2) (3)(−1) + (−1)(−3) (3)(1) + (−1)(2) = 10 01 = I2 . 2. We have AA−1 = 4 3 9 7 7 −9 −3 4 = (4)(7) + (9)(−3) (4)(−9) + (9)(4) (3)(7) + (7)(−3) (3)(−9) + (7)(4) = 10 01 = I2 . 175 3. We have AA−1 351 8 −29 3 19 −2 = 1 2 1 −5 267 2 −8 1 (3)(8) + (5)(−5) + (1)(2) (3)(−29) + (5)(19) + (1)(−8) (3)(3) + (5)(−2) + (1)(1) = (1)(8) + (2)(−5) + (1)(2) (1)(−29) + (2)(19) + (1)(−8) (1)(3) + (2)(−2) + (1)(1) (2)(8) + (6)(−5) + (7)(2) (2)(−29) + (6)(19) + (7)(−8) (2)(3) + (6)(−2) + (7)(1) 100 = 0 1 0 = I3 . 001 4. We have [A|I2 ] = 1 1 21 30 0 1 1 0 1 ∼ 2 1 1 −1 0 1 10 3 −2 0 1 −1 1 2 ∼ = [I2 |A−1 ]. Therefore, 3 −2 −1 1 A−1 = 1. A12 (−1) . 2. A21 (−2) 5. We have [A|I2 ] = 1 1+i 1 1−i 1 0 0 1 3 ∼ 1 0 1 ∼ 1 0 1+i −1 −1 + i 1 1 0 −1 1 + i 0 1 1 − i −1 2 ∼ 1 0 1 0 1+i 1 1 − i −1 = [I2 |A−1 ]. Thus, −1 1 + i 1 − i −1 A−1 = 1. A12 (−1 + i) 2. M2 (−1) . 3. A21 (−1 − i) 6. We have [A|I2 ] = 1 −i 1 i−1 20 3 ∼ 0 1 1 ∼ −i 1 0 1−i 1−i 1 1 0 1 0 1+i 01 1 −1+i 2 1+i 2 2 ∼ 1 −i 1 0 11 = [I2 |A−1 ]. Thus, A−1 = 1. 
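True-False question 6 (part 2 of Theorem 2.6.9) is easy to illustrate numerically. A sketch using the matrix of Problem 1 together with a second invertible sample of my own:

```python
from fractions import Fraction

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    """Inverse of a 2x2 matrix via the determinant formula."""
    d = Fraction(A[0][0] * A[1][1] - A[0][1] * A[1][0])
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

A = [[2, -1], [3, -1]]   # the matrix of Problem 1
B = [[1, 2], [3, 5]]     # a second invertible sample

# (AB)^{-1} equals B^{-1} A^{-1}, and the order of the factors matters.
assert inv2(mul(A, B)) == mul(inv2(B), inv2(A))
assert inv2(mul(A, B)) != mul(inv2(A), inv2(B))
```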
A12 (1 − i) 1+i 1 −1+i 2 1+i 2 2. M2 (1/(1 − i)) . 3. A21 (i) 0 1+i 2 176 7. Note that AB = 02 for all 2 × 2 matrices B . Therefore, A is not invertible. 8. We have 1 −1 2 1 11 [A|I3 ] = 2 4 −3 10 104 3 ∼ 0 1 2 001 1 0 0 0 1 −1 1 0 ∼ 0 3 1 0 1 −3 0 1 1 4 −4 0 1 ∼ 0 0 10 1 −3 0 1 0 0 1 −1 2 100 2 0 ∼ 0 1 2 −4 0 1 1 0 3 7 −2 1 0 0 −43 −4 13 7 = [I3 |A−1 ]. 0 −24 −2 1 10 1 −3 2 1 7 −2 2 −4 0 1 0 0 1 0 Thus, A−1 1. A12 (−2), A13 (−4) −43 −4 13 7 . = −24 −2 10 1 −3 3. A21 (1), A23 (−3) 2. P23 4. A31 (−4), A32 (−2) 9. We have 35110 [A|I3 ] = 1 2 1 0 1 26700 121 0 10 3 4 3 0 ∼ ∼ 0 1 2 −1 025 0 −2 1 0 1 1 0 ∼ 3 1 2 10 10 1 2 10 2 0 0 ∼ 0 −1 −2 1 −3 0 01 0 2 5 0 −2 1 1 0 −3 2 −5 0 100 8 −29 3 5 01 2 −1 3 0 ∼ 0 1 0 −5 19 −2 = [I3 |A−1 ]. 00 1 2 −8 1 001 2 −8 1 210 511 670 Thus, A−1 1. P12 2. A12 (−3), A13 (−2) 8 −29 3 19 −2 . = −5 2 −8 1 3. M2 (−1) 4. A21 (−2), A23 (−2) 5. A31 (3), A32 (−2) 10. This matrix is not invertible, because the column of zeros guarantees that the rank of the matrix is less than three. 11. We have 42 [A|I3 ] = 2 1 32 1 1 11 3 ∼ 0 −1 −29 0 −2 −57 0 32 4 1 0 ∼ 2 1 −7 1 4 2 −13 0 −1 1 1 1 11 0 4 0 3 −2 ∼ 0 1 29 0 1 4 −4 0 −2 −57 1 1 0 18 −34 −1 6 ∼ 0 1 0 −29 55 00 1 1 −2 −13 1 −7 0 40 0 1 0 0 0 1 1 1 2 0 ∼ 2 0 4 −1 1 1 5 −3 2 ∼ 0 4 −4 0 0 1 0 2 = [I3 |A−1 ]. 0 1 11 0 −1 1 1 −7 0 1 0 2 −13 1 00 0 −18 0 2 −1 1 29 0 −3 2 0 1 1 −2 0 177 Thus, A−1 18 −34 −1 55 2 . = −29 1 −2 0 2. A21 (−1) 1. P13 3. A12 (−2), A13 (−4) 5. A21 (−1), A23 (2) 4. M2 (−1) 6. A31 (18), A32 (−29) 12. We have 1 2 −3 [A|I3 ] = 2 6 −2 −1 1 4 1 3 ∼ 0 0 0 1 1 0 ∼ 0 1 0 1 2 −3 2 4 −2 3 1 1 0 −7 3 −1 0 10 4 1 0 ∼ 0 1 1 2 −1 2 0 −5 00 4 −3 1 2 13 11 7 1 0 0 −5 −5 10 5 3 1 2 − 10 ∼ 0 1 0 5 5 4 3 1 0 0 1 −5 −5 10 1 0 0 0 1 0 1 1 2 −3 0 2 0 ∼ 0 1 2 −1 1 1 03 1 −7 3 −1 0 1 0 2 −1 2 4 3 1 1 − 5 10 − 5 0 1 0 0 1 2 0 0 0 1 = [I3 |A−1 ]. Thus, A−1 = 2. M2 ( 1 ) 2 1. A12 (−2), A13 (1) 13. We have 1 i 2 [A|I3 ] = 1 + i −1 2i 2 2i 5 10 3 ∼ 0 1 00 − 13 5 3 5 4 −5 11 10 1 − 10 3 10 2 5 −1 5 . 1 4. 
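The [A | I] → [I | A⁻¹] reduction used in these problems can be automated with exact arithmetic. A sketch (assumes the input matrix is invertible), checked against Problem 9:

```python
from fractions import Fraction

def inverse(A):
    """Invert a nonsingular matrix by row-reducing [A | I] to [I | A^{-1}]."""
    n = len(A)
    M = [[Fraction(x) for x in row]
         + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for i in range(n):
            if i != c and M[i][c] != 0:
                f = M[i][c]
                M[i] = [y - f * x for x, y in zip(M[c], M[i])]
    return [row[n:] for row in M]

# Problem 9: A = [[3,5,1],[1,2,1],[2,6,7]].
assert inverse([[3, 5, 1], [1, 2, 1], [2, 6, 7]]) == \
    [[8, -29, 3], [-5, 19, -2], [2, -8, 1]]
```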
M3 (− 5 ) 3. A21 (−2), A23 (−3) 0 1 i 2 1 0 ∼ 0 −i −2 1 0 0 1 0 −i 1 0 1 4 −2i 1 − i i 0 ∼ 0 1 −2 0 1 0 1 0 0 −7 5 0 1 0 5. A31 (7), A32 (−2) 1 00 1i 2 1 00 2 −1 − i 1 0 ∼ 0 1 −2i 1 − i i 0 −2 01 00 1 −2 0 1 00 −i 10 1 0 1 − 5i i 2i = [I3 |A−1 ]. 01 −2 01 Thus, A−1 −i 10 = 1 − 5i i 2i . −2 01 1. A12 (−1 − i), A13 (−2) 2. M2 (i) 3. A21 (−i) 4. A32 (2i) 14. We have 2 131 [A|I3 ] = 1 −1 2 0 3 340 0 1 0 0 1 −1 2 0 1 0 ∼ 2 131 1 3 340 1 0 0 0 1 −1 20 10 2 0 ∼ 0 3 −1 1 −2 0 1 0 6 −2 0 −3 1 178 0 10 1 −1 2 3 −1 1 −2 0 ∼ 0 0 0 0 −2 11 3 Since 2 = rank(A) < rank(A# ) = 3, we know that A−1 does not exist (we have obtained a row of zeros in the block matrix on the left. 2. A12 (−2), A13 (−3) 1. P12 3. A23 (−2) 15. We have 1 −1 2 31 2 0 3 −4 0 [A|I4 ] = 3 −1 7 80 1 03 50 0 1 0 0 0 0 1 0 1 −1 2 3 0 1000 010 2 −1 −10 −2 1 0 0 ∼ 0 0 2 1 −1 −3 0 1 0 1 0 1 1 2 −1 0 0 1 1 1000 1 −1 2 3 1 1 2 −1 0 0 1 3 0 20 ∼ ∼ 0 2 1 −1 −3 0 1 0 0 0 2 −1 −10 −2 1 0 0 0 1 10 3 5 00 0 1 0 1 1 2 −1 0 0 150 4 ∼ ∼ 0 0 1 5 1 0 −1 2 0 0 0 −3 −14 01 0 −2 0 1000 27 10 −27 7 3 −8 60 1 0 0 ∼ 0 0 1 0 −14 −5 14 0001 3 1 −3 0 3 5 0 1 1 2 −1 0 −1 −5 −1 0 −3 −14 0 0 0 0 1 0 0 −10 −3 0 1 0 −3 −2 0 01 5 10 00 1 31 35 11 = [I4 |A−1 ]. −18 4 0 1 0 1 1 −2 0 −2 3 −5 1 −1 −1 2 −3 4 Thus, 27 10 −27 35 7 3 −8 11 . = −14 −5 14 −18 3 1 −3 4 A−1 1. A12 (−2), A13 (−3), A14 (−1) 4. M3 (−1) 2. P13 5. A31 (−3), A32 (−1), A34 (3) 3. A21 (1), A23 (−2), A24 (−2) 6. A41 (10), A42 (3), A43 (5) 16. 
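Complex inverses such as Problem 13's can be confirmed the same way as the real ones: multiply and compare with the identity. Since every entry here is a Gaussian integer, Python's complex arithmetic is exact:

```python
# Problem 13: confirm that A A^{-1} = A^{-1} A = I3 for the matrices above.
A = [[1, 1j, 2], [1 + 1j, -1, 2j], [2, 2j, 5]]
Ainv = [[-1j, 1, 0], [1 - 5j, 1j, 2j], [-2, 0, 1]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(A, Ainv) == I3
assert matmul(Ainv, A) == I3
```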
We have 0 −2 −1 −3 2 0 2 1 [A|I4 ] = 1 −2 0 2 3 −1 −2 0 1 −2 0 2 4 2 −3 20 ∼ 0 −2 −1 −3 0 5 −2 −6 0 0 1 0 0 1 −2 0 2 012 0 2 1 ∼ 0 0 −2 −1 −3 1 3 −1 −2 0 0 0 1 0 0 1 0 0 1 −2 0 20 0 10 1 3 1 −2 0 3 0 1 −4 0 2 ∼ 0 0 0 0 −2 −1 −3 1 0 −3 1 0 5 −2 −6 0 0 0 −1 0 2 0 0 −3 1 1 0 0 0 0 1 0 0 0 0 1 0 1 4 0 0 1 0 0 0 1 0 0 0 1 179 1 40 ∼ 0 0 1 60 ∼ 0 0 1 80 ∼ 0 0 0 1 0 0 0 1 1 1 2 0 0 0 −9 2 0 1 0 0 1 2 3 −4 −9 2 −9 4 1 2 3 −4 1 2 9 −2 1 1 2 1 0 1 0 2 1 0 4 1 1 2 5 0 −4 1 2 1 4 5 18 1 2 0 0 0 1 2 0 0 0 9 1 0 −1 0 9 1 5 1 0 18 2 0 1 −2 −1 9 9 −1 9 −5 9 1 9 2 9 1 1 10 1 0 0 00 2 2 3 1 1 −1 0 5 0 1 −4 0 −1 0 2 2 4 2 ∼ 0 0 −9 −9 0 −5 −1 1 −1 0 2 4 4 2 1 9 −1 1 −1 0 00 0 −2 1 2 2 2 1 2 100 0 0 9 −9 0 0 9 1 −1 0 7 0 1 0 −1 0 1 − 5 2 9 9 9 1 5 1 2 ∼ 1 2 001 0 18 −9 −9 9 2 9 9 1 −1 0 0 0 0 − 2 1 2 −1 0 2 2 2 1000 −1 0 9 9 9 9 1 1 1 0 1 0 0 −2 0 −3 9 9 ∼ 9 9 = [I |A−1 ]. 4 2 1 1 2 0 0 1 0 −9 0 −9 9 3 2 1 2 0 0 0 0 1 −9 −9 0 9 0 Thus, 0 −2 9 A−1 = 1 9 −2 9 5. P34 6. M3 (− 2 ) 9 2 −1 9 9 1 1 0 −3 9 . 1 0 −2 3 9 1 2 −9 0 9 1 3. M2 ( 4 ) 2. A12 (−2), A14 (−3) 1. P13 2 9 7. A31 (−1), 4. A21 (2), A23 (2), A24 (−5) 1 A32 (− 2 ) 2 8. M4 (− 9 ) 9. A42 (1), A43 (− 1 ) 2 second column of A−1 without determining the whole inverse, vector 4 x 0 2 −1 2 y = 1 . The corresponding augmented matrix 5 1 3 z 0 1 −1 −1 3 0 1 −2 0 . Thus, back-substitution yields z = −1, y = −2, and 0 1 −1 1 the second column vector of A−1 is −2 . −1 17. To determine the 2 −1 1 linear system 5 1 −1 1 be row-reduced to 0 0 18. We have A = 1 2 3 5 ,b= 1 3 , and the Gauss-Jordan method yields A−1 = we 4 2 3 solve the 0 1 can 0 x = 1. Thus, −5 3 2 −1 . Therefore, we have x = A−1 b = So we have x1 = 4 and 1 19. We have A = 0 2 Therefore, we have −5 3 2 −1 1 3 = 4 −1 . x2 = −1. 1 −2 −2 7 5 −3 1 1 , b = 3 , and the Gauss-Jordan method yields A−1 = −2 −1 1 . 4 −3 1 2 2 −1 7 5 −3 −2 −2 1 3 = 2 . x = A−1 b = −2 −1 2 2 −1 1 1 Hence, we have x1 = −2, x2 = 2, and x3 = 1. 180 1 −2i 2−i 4i 20. 
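Problem 18's computation in code, reading the data as A = [[1, 3], [2, 5]] — the reading under which the displayed inverse and the solution (4, −1) both check out:

```python
# Problem 18: solve Ax = b via x = A^{-1} b.
A = [[1, 3], [2, 5]]
Ainv = [[-5, 3], [2, -1]]
b = [1, 3]

x = [sum(Ainv[i][j] * b[j] for j in range(2)) for i in range(2)]
assert x == [4, -1]

# Sanity check: A x really is b.
assert [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)] == b
```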
We have A = 2 −i ,b= , and the Gauss-Jordan method yields A−1 = 1 2+8i 4i 2i −2 + i 1 . Therefore, we have x = A−1 b = 1 2 + 8i 4i 2i −2 + i 1 2 −i = 1 2 + 8i 2 + 8i −4 + i . and x2 = −4+ii . 2+8 45 1 −79 27 46 10 1 , b = 1 , and the Gauss-Jordan method yields A−1 = 12 −4 −7 . 18 1 38 −13 −22 Hence, we have x1 = 1 3 21. We have A = 2 4 Therefore, we have −79 27 46 1 −6 x = A−1 b = 12 −4 −7 1 = 1 . 38 −13 −22 1 3 Hence, we have x1 = −6, x2 = 1, and x3 = 3. 12 1 1 2 2 −1 , b = 24 , and the Gauss-Jordan method yields A−1 = 22. We have A = 1 −36 2 −1 1 Therefore, we have −1 3 5 12 −10 1 3 3 −3 24 = 18 . x = A−1 b = 12 5 −3 −1 −36 2 −1 3 5 1 3 3 −3 . 12 5 −3 −1 Hence, x1 = −10, x2 = 18, and x3 = 2. 23. We have AAT = 0 −1 1 0 0 −1 1 0 = (0)(0) + (1)(1) (−1)(0) + (0)(1) (0)(−1) + (1)(0) (−1)(−1) + (0)(0) = 1 0 0 1 = I2 , = 1 0 0 1 = I2 , so AT = A−1 . 24. We have √ 3/2 √ /2 1 3/2 √1/2 − 3/2 3/2 −1/2 1/2 √ √ √ √ ( 3/2)( 3/2) + √ /2)(1/2) (1 ( 3/2)(−1/2) + √ /2)( √ /2) (1 3 √ (−1/2)( 3/2) + ( 3/2)(1/2) (−1/2)(−1/2) + ( 3/2)( 3/2) √ T AA = = so AT = A−1 . 25. We have AAT = = so AT = A−1 . cos α − sin α sin α cos α cos α sin α − sin α cos α cos2 α + sin2 α (cos α)(− sin α) + (sin α)(cos α) (− sin α)(cos α) + (cos α)(sin α) (− sin α)2 + cos2 α = 1 0 0 1 = I2 , 181 26. We have T AA = 1 1 + 2x2 −2x 1 − 2x2 2x 1 2x 2x2 = 1 1 + 4x2 + 4x4 2x2 −2x 1 1 + 4x2 + 4x4 0 0 1 1 + 2x2 1 2x −2x 1 − 2x2 2x2 −2x 0 1 + 4x2 + 4x4 0 2x2 2x 1 0 = I3 , 0 1 + 4x2 + 4x4 so AT = A−1 . 27. For part 2, we have (B −1 A−1 )(AB ) = B −1 (A−1 A)B = B −1 In B = B −1 B = In , and for part 3, we have T (A−1 )T AT = (AA−1 )T = In = In . 28. We prove this by induction on k , with k = 1 trivial and k = 2 proven in part 2 of Theorem 2.6.9. Assuming the statement is true for a product involving k − 1 matrices, we may proceed as follows: (A1 A2 · · · Ak )−1 = ((A1 A2 · · · Ak−1 )Ak )−1 = A−1 (A1 A2 · · · Ak−1 )−1 k 1 1 = A−1 (A−−1 · · · A−1 A−1 ) = A−1 A−−1 · · · A−1 A−1 . 
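Problems 23 through 26 all verify A A^T = I by direct entry-by-entry multiplication; the same check can be run numerically. A sketch for Problem 25's rotation matrix (the particular angle value below is an arbitrary choice; any alpha works):

```python
import numpy as np

# Problem 25: for any angle alpha, the rotation matrix satisfies
# A @ A.T = I2, and hence A^T = A^{-1}.
alpha = 0.7   # arbitrary test angle
A = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])

print(np.allclose(A @ A.T, np.eye(2)))      # True
print(np.allclose(A.T, np.linalg.inv(A)))   # True
```

This is the defining property of an orthogonal matrix: its transpose is its inverse, which is why cos^2(alpha) + sin^2(alpha) = 1 makes the diagonal entries collapse to 1.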
In the second equality, we have applied part 2 of Theorem 2.6.9 to the two matrices A1 A2 · · · A(k−1) and Ak, and in the third equality, we have assumed that the desired property is true for products of k − 1 matrices.

29. Since A is symmetric, we know that A^T = A. We wish to show that (A^(−1))^T = A^(−1). We have

(A^(−1))^T = (A^T)^(−1) = A^(−1),

which shows that A^(−1) is symmetric. The first equality follows from part 3 of Theorem 2.6.9, and the second equality results from the assumption that A is symmetric.

30. Since A is skew-symmetric, we know that A^T = −A. We wish to show that (A^(−1))^T = −A^(−1). We have

(A^(−1))^T = (A^T)^(−1) = (−A)^(−1) = −(A^(−1)),

which shows that A^(−1) is skew-symmetric. The first equality follows from part 3 of Theorem 2.6.9, and the second equality results from the assumption that A is skew-symmetric.

31. We have

(In − A)(In + A + A^2 + A^3) = In(In + A + A^2 + A^3) − A(In + A + A^2 + A^3)
                             = In + A + A^2 + A^3 − A − A^2 − A^3 − A^4
                             = In − A^4 = In,

where the last equality uses the assumption that A^4 = 0. This calculation shows that In − A and In + A + A^2 + A^3 are inverses of one another.

32. We have B = B In = B(AC) = (BA)C = In C = C.

33. YES. Since BA = In, we know that A^(−1) = B (see Theorem 2.6.11). Likewise, since CA = In, A^(−1) = C. Since the inverse of A is unique, it must follow that B = C.

34. We can simply compute, with ∆ = a11 a22 − a12 a21:

(1/∆) [  a22  −a12 ] [ a11  a12 ]  =  (1/∆) [ a22 a11 − a12 a21    a22 a12 − a12 a22 ]
      [ −a21   a11 ] [ a21  a22 ]           [ −a21 a11 + a11 a21  −a21 a12 + a11 a22 ]

                                   =  (1/∆) [ a11 a22 − a12 a21          0          ]
                                            [         0          a11 a22 − a12 a21 ]

                                   =  [ 1  0 ]  =  I2.
                                      [ 0  1 ]

Therefore,

[ a11  a12 ]^(−1)  =  (1/∆) [  a22  −a12 ].
[ a21  a22 ]                [ −a21   a11 ]

35. Assume that A is an invertible matrix and that A xi = bi for i = 1, 2, . . . , p (where each bi is given). Use elementary row operations on the augmented matrix of the system to obtain the equivalence

[A | b1 b2 b3 . . . bp] ∼ [In | c1 c2 c3 . . . cp].

The solutions to the system can be read from the last matrix: xi = ci for each i = 1, 2, . . . , p.

36.
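Problem 31's identity, that In + A + A^2 + A^3 inverts In − A whenever A^4 = 0, is easy to test numerically. A sketch using a strictly upper triangular matrix (any 4 × 4 matrix of that shape satisfies A^4 = 0; the particular entries below are an arbitrary choice):

```python
import numpy as np

# Any strictly upper triangular 4x4 matrix is nilpotent with A^4 = 0.
A = np.triu(np.ones((4, 4)), k=1)
I = np.eye(4)

assert np.allclose(np.linalg.matrix_power(A, 4), 0)   # A^4 = 0 holds
B = I + A + A @ A + A @ A @ A
print(np.allclose((I - A) @ B, I))   # True: B = (I - A)^{-1}
```

This is the matrix analogue of the finite geometric series (1 − x)(1 + x + x^2 + x^3) = 1 − x^4.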
We have 1 −1 2 −1 1 1 10 2 0 1 ∼ 00 1 −1 2 1 −1 1 1 −1 2 1 1 4 1 2 3 ∼ 0 1 2 −1 4 −1 6 −1 0 2 5 −2 52 6 0 3 0 3 1 100 0 9 −5 3 2 −1 4 −1 ∼ 0 1 0 −1 8 −5 . 1 0 −2 2 001 0 −2 2 Hence, x1 = (0, −1, 0), x2 = (9, 8, −2), 1. A12 (−2), A13 (−1) x3 = (−5, −5, 2). 2. A21 (1), A23 (−2) 3. A31 (−3), A32 (−2) 37. (a): Let ei denote the ith column vector of the identity matrix Im , and consider the m linear systems of equations Axi = ei for i = 1, 2, . . . , m. Since rank(A) = m and each ei is a column m-vector, it follows that rank(A# ) = m = rank(A) and so each of the systems Axi = ei above has a solution (Note that if m < n, then there will be an inﬁnite number of solutions). If we let B = [x1 , x2 , . . . , xm ], then AB = A [x1 , x2 , . . . , xm ] = [Ax1 , Ax2 , . . . , Axm ] = [e1 , e2 , . . . , em ] = In . ad (b): A right inverse for A in this case is a 3 × 2 matrix b e such that cf a + 3b + c d + 3e + f 2a + 7b + 4c 2d + 7e + 4f = 10 01 . 183 Thus, we must have a + 3b + c = 1, d + 3e + f = 0, 2a + 7b + 4c = 0, 2d + 7e + 4f = 1. The ﬁrst and third equation comprise a linear system with augmented matrix 1311 2740 for a, b, and 1 131 . Setting c = t, we have b = −2 − 2t 0 1 2 −2 and a = 7 + 5t. Next, the second and fourth equation above comprise a linear system with augmented matrix 1310 1310 for d, e, and f . The row-echelon form of this augmented matrix is . Setting 2741 0121 f = s, we have e = 1 − 2s and d = −3 + 5s. Thus, right inverses of A are precisely the matrices of the form 7 + 5t −3 + 5s −2 − 2t 1 − 2s . t s c. The row-echelon form of this augmented matrix is Solutions to Section 2.7 True-False Review: 1. TRUE. Since every elementary matrix corresponds to a (reversible) elementary row operation, the reverse elementary row operation will correspond to an elementary matrix that is the inverse of the original elementary matrix. 2 0 2. FALSE. For instance, the matrices product, 2 0 0 2 0 1 and 1 0 0 2 are both elementary matrices, but their , is not. 3. FALSE. 
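The two-parameter family of right inverses found in Problem 37(b) can be spot-checked by multiplying back; a small sketch (the parameter values tried below are arbitrary):

```python
import numpy as np

# Problem 37(b): every right inverse of A = [[1, 3, 1], [2, 7, 4]] has
# the form [[7 + 5t, -3 + 5s], [-2 - 2t, 1 - 2s], [t, s]].
A = np.array([[1.0, 3.0, 1.0],
              [2.0, 7.0, 4.0]])

def right_inverse(t, s):
    return np.array([[7 + 5*t, -3 + 5*s],
                     [-2 - 2*t, 1 - 2*s],
                     [t, s]])

for t, s in [(0.0, 0.0), (2.0, -1.0)]:
    print(np.allclose(A @ right_inverse(t, s), np.eye(2)))   # True, True
```

Because A is 2 × 3 with rank 2, it has infinitely many right inverses but no left inverse, which is exactly what the free parameters t and s reflect.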
Every invertible matrix can be expressed as a product of elementary matrices. Since every elementary matrix is invertible and products of invertible matrices are invertible, any product of elementary matrices must be an invertible matrix. 4. TRUE. Performing an elementary row operation on a matrix does not alter its rank, and the matrix EA is obtained from A by performing the elementary row operation associated with the elementary matrix E . Therefore, A and EA have the same rank. 2 5. FALSE. If Pij is a permutation matrix, then Pij = In , since permuting the ith and j th rows of In twice − 2 yields In . Alternatively, we can observe that Pij = In from the fact that Pij 1 = Pij . 6. FALSE. For example, consider the elementary matrices E1 = have E1 E2 = 1 0 1 7 and E2 E1 = 1 0 7 7 1 0 1 Then we have E1 E2 = 0 0 and E2 = 1 0 1 1 . Then we . 7. FALSE. For example, 0 7 1 consider the elementary matrices E1 = 0 0 36 130 1 2 and E2 E1 = 0 1 2 . 01 001 3 1 0 0 1 0 and E2 = 0 1 0 0 1 0 0 2 . 1 8. FALSE. The only matrices we perform an LU factorization for are invertible matrices for which the reduction to upper triangular form can be accomplished without permuting rows. 184 9. FALSE. The matrix U need not be a unit upper triangular matrix. 10. FALSE. As can be seen in Example 2.7.8, a 4 × 4 matrix with LU factorization will have 6 multipliers, not 10 multipliers. Problems: 1. 0 Permutation Matrices: P12 = 1 0 k0 Scaling Matrices: M1 (k ) = 0 1 00 0 0 , 1 1 0 0 0 0 , 1 1 1 0 , P23 = 0 0 0 00 k 0 , M3 (k ) = 01 0 P13 = 0 1 1 M2 (k ) = 0 0 0 1 0 0 0 1 0 1 . 0 1 0 0 00 1 0 . 0k Row Combinations: 10 A12 (k ) = k 1 00 1k A21 (k ) = 0 1 00 0 0 , 1 0 0 , 1 2. We have 3 5 1 −2 1 ∼ 10 A13 (k ) = 0 1 k0 10 A31 (k ) = 0 1 00 1 −2 3 5 1. P12 2 ∼ 0 0 , 1 k 0 , 1 1 −2 0 11 2. A12 (−3) 100 A23 (k ) = 0 1 0 , 0k1 100 A32 (k ) = 0 1 k . 001 3 ∼ 1 −2 0 1 . 1 3. M2 ( 11 ) 1 Elementary Matrices: M2 ( 11 ), A12 (−3), P12 . 3. We have 5 1 8 2 3 −1 1 ∼ 1 3 −1 58 2 1. P12 2 ∼ 1 3 −1 0 −7 7 2. 
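The three types of elementary matrices catalogued in Problem 1 can be generated programmatically by applying the corresponding row operation to the identity. A sketch (the function names mirror the text's P_ij, M_i(k), A_ij(k) notation, but use 0-based row indices, which is an implementation choice):

```python
import numpy as np

def P(i, j, n=3):
    """Permutation matrix: swap rows i and j of I_n."""
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

def M(i, k, n=3):
    """Scaling matrix: multiply row i of I_n by k."""
    E = np.eye(n)
    E[i, i] = k
    return E

def A_comb(i, j, k, n=3):
    """Row-combination matrix: add k times row i to row j of I_n."""
    E = np.eye(n)
    E[j, i] = k
    return E

# Each elementary row operation is reversible: A_ij(k)^{-1} = A_ij(-k),
# M_i(k)^{-1} = M_i(1/k), and P_ij is its own inverse.
print(np.allclose(A_comb(0, 1, 5) @ A_comb(0, 1, -5), np.eye(3)))   # True
print(np.allclose(P(0, 2) @ P(0, 2), np.eye(3)))                    # True
```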
A12 (−5) 3 ∼ 1 0 3 −1 1 −1 . 3. M2 (− 1 ) 7 Elementary Matrices: M2 (− 1 ), A12 (−5), P12 . 7 4. We have 13 1 3 2 1 3 2 3 −1 4 1 32 1 2 3 4 2 1 3 ∼ 2 1 3 ∼ 0 −5 −1 ∼ 0 −5 −1 ∼ 0 1 0 0 0 1 32 3 −1 4 0 −10 −2 00 1. P13 2. A12 (−2), A13 (−3) 3. A23 (−2) 2 1 5 . 0 4. M2 (− 1 ) 5 Elementary Matrices: M2 (− 1 ), A23 (−2), A13 (−3), A12 (−2), P13 . 5 5. We have 1 2 3 2 3 4 3 4 5 4 1 2 3 4 1 2 3 4 1 1 2 3 5 ∼ 0 −1 −2 −3 ∼ 0 1 2 3 ∼ 0 6 0 −2 −4 −6 0 −2 −4 −6 0 2 1 0 3 2 0 4 3 . 0 185 1. A12 (−2), A13 (−3) 2. M2 (−1) 3. A23 (2) Elementary Matrices: A23 (2), M2 (−1), A13 (−3), A12 (−2). 6. We reduce A to the identity matrix: 1 1 2 3 1 0 1 ∼ 2 1 1. A12 (−1) 1 0 2 ∼ 0 1 . 2. A21 (−2) 1 −1 The elementary matrices corresponding to these row operations are E1 = 0 1 1 −2 0 1 and E2 = We have E2 E1 A = I2 , so that 1 1 − − A = E 1 1 E2 1 = 0 1 1 0 2 1 , − − which is the desired expression since E1 1 and E2 1 are elementary matrices. 7. We reduce A to the identity matrix: −2 −3 5 7 −2 −3 1 1 1 ∼ 1 1 −2 −3 2 ∼ 1. A12 (2) 2. P12 1 1 0 −1 3 ∼ 3. A12 (2) 4 ∼ 1 0 0 −1 1 0 5 ∼ 0 1 . 5. M2 (−1) 4. A21 (1) The elementary matrices corresponding to these row operations are E1 = 1 2 0 1 , E2 = 0 1 1 0 , E3 = 1 2 0 1 , 1 0 E4 = 1 1 , 1 0 0 −1 E5 = . We have E5 E4 E3 E2 E1 A = I2 , so 1 −2 − − − − − A = E 1 1 E2 1 E 3 1 E4 1 E5 1 = 0 1 0 1 1 0 1 −2 0 1 1 −1 0 1 1 0 0 −1 , − which is the desired expression since each Ei 1 is an elementary matrix. 8. We reduce A to the identity matrix: 3 −4 −1 2 1 ∼ −1 2 3 −4 1. P12 1 −2 3 −4 2 ∼ 2. M1 (−1) 1 −2 0 2 3 ∼ 3. A12 (−3) 4 ∼ 4. M2 ( 1 ) 2 1 −2 0 1 5 ∼ 1 0 0 1 . 5. A21 (2) The elementary matrices corresponding to these row operations are E1 = 0 1 1 0 , E2 = −1 0 0 1 , E3 = 1 −3 0 1 −1 0 0 1 , E4 = 1 0 0 1 2 , E5 = 1 0 1 −2 0 1 , We have E5 E4 E3 E2 E1 A = I2 , so − − − − − A = E1 1 E2 1 E 3 1 E4 1 E 5 1 = 0 1 1 0 1 3 0 1 1 0 0 2 2 1 . . 186 − which is the desired expression since each Ei 1 is an elementary matrix. 9. 
We reduce A to the identity matrix: 4 −5 1 4 1 4 4 −5 1 ∼ 1. P12 1 4 0 −21 2 ∼ 1 0 3 ∼ 4 1 1 3. M2 (− 21 ) 2. A12 (−4) 4 ∼ 1 0 0 1 . 4. A21 (−4) The elementary matrices corresponding to these row operations are E1 = 0 1 1 0 , E2 = 1 −4 0 1 , E3 = 1 0 1 0 − 21 , 1 −4 0 1 E4 = . We have E4 E3 E2 E1 A = I2 , so 0 1 − − − − A = E1 1 E2 1 E 3 1 E4 1 = 1 0 1 4 0 1 1 0 0 −21 1 0 4 1 , − which is the desired expression since each Ei 1 is an elementary matrix. 10. We reduce A to the identity matrix: 1 −1 0 1 −1 0 1 −1 0 1 −1 0 1 2 3 2 2 2 ∼ 0 4 2 ∼ 0 4 2 ∼ 0 4 2 3 13 3 13 0 43 0 01 1 −1 1 ∼ 0 0 0 4 1. A12 (−2) 2. A13 (−3) 1 1 −1 0 6 1 1 0 ∼ 0 ∼ 0 2 0 0 01 1 0 3. A23 (−1) 1 4. M2 ( 4 ) 0 0 . 1 0 1 0 5 5. A32 (− 1 ) 2 6. A21 (1) The elementary matrices corresponding to these row operations are 100 100 1 00 1 0 , E1 = −2 1 0 , E2 = 0 1 0 , E3 = 0 −3 0 1 0 −1 1 001 1 E4 = 0 0 0 1 4 0 0 0 , 1 10 0 E5 = 0 1 − 1 , 2 00 1 1 E6 = 0 0 1 1 0 0 0 . 1 We have E6 E5 E4 E3 E2 E1 A = I3 , so − − − − − − A = E1 1 E 2 1 E3 1 E4 1 E 5 1 E6 1 100 100 1 = 2 1 0 0 1 0 0 001 301 0 0 1 1 0 1 0 0 1 0 0 4 0 10 0 0 0 1 1 00 − which is the desired expression since each Ei 1 is an elementary matrix. 1 −1 0 1 0 1 0 , 2 0 01 1 0 187 11. We reduce A to the identity matrix: 0 −4 −2 1 1 1 −1 3 ∼ 0 −2 2 2 −2 1 −1 3 1 4 5 ∼ 0 −4 0 ∼ 0 0 01 0 1. P12 2. A13 (2) −1 3 2 −4 −2 ∼ 2 2 −1 0 6 −4 0 ∼ 01 1 3. M3 ( 8 ) The elementary matrices corresponding to 1 010 E1 = 1 0 0 , E 2 = 0 2 001 1 0 −3 0 , E5 = 0 1 00 1 4. A32 (2) 1 −1 3 1 −1 3 3 0 −4 −2 ∼ 0 −4 −2 0 0 8 0 0 1 1 −1 0 100 7 0 1 0 ∼ 0 1 0 . 0 01 001 6. M2 (− 1 ) 4 5. A31 (−3) 7. A21 (1) these row operations are 100 00 10 1 0 , E3 = 0 1 0 , E4 = 0 1 01 00 001 8 1 00 110 1 E6 = 0 − 4 0 , E7 = 0 1 0 . 
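Factorizations into elementary matrices, like those obtained in Problems 6 through 10, can be confirmed by multiplying the inverse elementary factors back together; a sketch for Problem 6:

```python
import numpy as np

# Problem 6: A = [[1, 2], [1, 3]] reduces to I2 by A12(-1) then A21(-2),
# so A = E1^{-1} E2^{-1} with E1^{-1} = A12(1) and E2^{-1} = A21(2).
E1_inv = np.array([[1.0, 0.0],
                   [1.0, 1.0]])   # A12(1)
E2_inv = np.array([[1.0, 2.0],
                   [0.0, 1.0]])   # A21(2)

A = E1_inv @ E2_inv
print(A)   # equals [[1, 2], [1, 3]]
```

Note that the factors appear in the reverse of the order in which the row operations were applied, since (E2 E1)^(-1) = E1^(-1) E2^(-1).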
001 0 01 0 2 , 1 We have E7 E6 E5 E4 E3 E2 E1 A = I3 , so − − − − − − − A = E1 1 E2 1 E 3 1 E4 1 E 5 1 E6 1 E7 1 1 100 010 = 1 0 0 0 1 0 0 0 −2 0 1 001 1 0 0 0 0 8 0 1 0 1 0 0 1 −2 0 0 0 1 0 1 0 1 −1 0 1 00 3 1 0 , 0 0 −4 0 0 0 01 0 01 1 − which is the desired expression since each Ei 1 is an elementary matrix. 12. We reduce A to the 1 0 3 1. M2 ( 1 ) 8 identity matrix: 23 12 1 8 0 ∼ 0 1 45 34 10 3 4 0 ∼ 0 1 0 0 −4 2. A13 (−3) 3 2 0 ∼ 5 1 5 ∼ 0 0 3. A21 (−2) 1 2 3 3 0 1 0 ∼ 0 −2 −4 03 10 6 1 0 ∼ 0 1 01 00 4. A23 (2) 1 0 3 0 1 0 0 −2 −4 0 0 . 1 5. M3 (− 1 ) 4 The elementary matrices corresponding to these row operations are 100 100 1 E1 = 0 1 0 , E 2 = 0 1 0 , E 3 = 0 8 −3 0 1 0 001 10 0 100 1 0 , E6 = 0 E4 = 0 1 0 , E 5 = 0 1 1 021 0 0 0 −4 6. A31 (−3) −2 0 1 0 , 01 0 −3 1 0 . 0 1 188 We have E6 E5 E4 E3 E2 E1 A = I3 , so − − − − − − A = E1 1 E 2 1 E3 1 E4 1 E 5 1 E6 1 100 100 1 = 0 8 0 0 1 0 0 001 301 0 0 1 00 1 0 0 1 0 0 1 0 −2 1 0 2 1 0 0 0 1 1 0 0 0 −4 0 0 1 0 3 0 , 1 − which is the desired expression since each Ei 1 is an elementary matrix. 13. We reduce A to the identity matrix: 2 −1 1 3 1 3 2 −1 1 ∼ 1 3 0 −7 2 ∼ 3. M2 (− 1 ) 7 2. A12 (−2) 1. P12 1 0 3 ∼ 3 1 1 0 4 ∼ 0 1 . 4. A21 (−3) The elementary matrices corresponding to these row operations are E1 = 0 1 1 0 , 1 −2 E2 = 0 1 , 1 0 1 0 −7 E3 = , 1 −3 0 1 E4 = . Direct multiplication veriﬁes that E4 E3 E2 E1 A = I2 . 14. We have 3 −2 −1 5 3 −2 0 13 3 1 ∼ = U. 1. A12 ( 1 ) 3 10 −1 1 3 − Hence, E1 = A12 ( 1 ). Then Equation (2.7.3) reads L = E1 1 = A12 (− 1 ) = 3 3 . Verifying Equation (2.7.2): LU = 3 1 3 −2 0 13 3 10 −1 1 3 2 3 0 − 13 2 3 −2 −1 5 = 15. We have 2 5 1 ∼ = U =⇒ m21 = = A. 5 =⇒ L = 2 1 0 1 5 2 . Then 1 LU = 0 1 5 2 2 3 0 − 13 2 2 5 = 3 1 = A. 1. A12 (− 5 ) 2 16. We have 3 5 1 2 1 ∼ 3 0 1 = U =⇒ m21 = 1 3 5 =⇒ L = 3 Then LU = 1 5 3 0 1 3 0 1 1 3 = 3 5 1 2 = A. 1 5 3 0 1 . 189 5 1. A12 (− 3 ) 17. We have 3 −1 2 3 −1 2 3 −1 2 1 2 6 −1 1 ∼ 0 1 −3 ∼ 0 1 −3 = U =⇒ m21 = 2, m31 = −1, m32 = 4. 
−3 52 0 4 4 0 0 16 Hence, 1 L= 2 −1 0 1 4 0 0 1 and 1 LU = 2 −1 0 1 4 0 3 −1 2 3 −1 2 0 0 1 −3 = 6 −1 1 = A. 1 0 0 16 −3 52 1. A12 (−2), A13 (1) 18. We have 5 2 1 5 2 1 52 1 2 −10 −2 3 ∼ 0 2 5 ∼ 0 2 15 2 −3 0 −4 −6 00 2. A23 (−4) 1 5 = U =⇒ m21 = −2, m31 = 3, m32 = −2. 4 Hence, 1 00 1 0 L = −2 3 −2 1 and 1 00 52 1 0 0 2 LU = −2 3 −2 1 00 1. A12 (2), A13 (−3) 19. We have 1 −1 2 3 2 0 3 −4 1 ∼ 3 −1 7 8 1 34 5 1 5 2 1 5 = −10 −2 3 = A. 4 15 2 −3 2. A23 (2) 1 −1 2 3 1 −1 2 3 2 −1 −10 0 2 −1 −10 2 0 ∼ 0 2 9 0 2 1 −1 0 0 0 4 22 0 4 2 2 1. A12 (−2), A13 (−3), A14 (−1) 1 −1 2 3 30 2 −1 −10 = U. ∼ 0 0 2 9 0 0 0 4 2. A23 (−1), A24 (−2) 3. A34 (−2) Hence, m21 = 2, Hence, 1 2 L= 3 1 0 1 1 2 0 0 1 2 0 0 0 1 20. We have 2 −3 1 2 4 −1 1 1 −8 2 2 −5 6 15 2 m31 = 3, m41 = 1, m32 = 1, m42 = 2, m43 = 2. and 1 2 LU = 3 1 0 1 1 2 0 0 1 2 0 1 −1 2 3 0 0 2 −1 −10 0 0 0 2 9 1 0 0 0 4 1 −1 2 3 2 0 3 −4 = = A. 3 −1 7 8 1 34 5 2 −3 1 2 2 −3 1 2 10 5 −1 −3 2 0 5 −1 −3 3 ∼ ∼ ∼ 0 −10 0 6 3 0 4 −3 0 10 2 −4 0 0 4 2 2 −3 1 2 0 5 −1 −3 = U. 0 0 4 −3 0 0 0 5 190 1. A12 (−2), A13 (4), A14 (−3) 2. A23 (2), A24 (−2) 3. A34 (−1) Hence, m31 = −4, m21 = 2, m41 = 3, Hence, 1 0 1 000 2 2 1 1 0 0 L= −4 −2 1 0 and LU = −4 −2 3 2 3 211 m32 = −2, m42 = 2, m43 = 1. 2 −3 1 2 2 −3 1 2 00 5 −1 −3 4 −1 1 1 0 0 0 = 0 4 −3 −8 2 2 −5 1 0 0 0 0 0 5 6 15 2 11 = A. 21. We have 1 2 2 3 1 ∼ 1 2 0 −1 = U =⇒ m21 = 2 =⇒ L = 10 21 . 1. A12 (−2) We now solve the triangular systems Ly = b and U x = y. From Ly = b, we obtain y = U x = y yields x = −11 7 3 −7 . Then . 22. We have 1 −3 5 1 −3 5 1 −3 5 1 2 3 2 2 ∼ 0 11 −13 ∼ 0 11 −13 = U =⇒ m21 = 3, m31 = 2, m32 = 1. 2 52 0 11 −8 0 0 5 1. A12 (−3), A13 (−2) 1 Hence, L = 3 2 1 obtain y = 2 −5 2. A23 (−1) 00 1 0 . We now solve the triangular systems Ly = b and U x = y. From Ly = b, we 1 1 3 . Then U x = y yields x = −1 . −1 23. We have 22 1 2 2 1 2 2 1 1 2 6 3 −1 ∼ 0 −3 −4 ∼ 0 −3 −4 = U =⇒ m21 = 3, m31 = −2, m32 = −2. −4 2 2 0 0 −4 0 0 −4 1. 
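The LU computations in Problems 14 through 26 follow a uniform pattern: Gaussian elimination without row interchanges produces U, the multipliers m_ij fill in the subdiagonal of L, and Ax = b is then solved by one forward and one backward triangular solve. A minimal sketch of that pattern, exercised on Problem 21's data:

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization without row interchanges."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]      # multiplier m_ij
            U[i, :] -= L[i, j] * U[j, :]     # row operation A_ji(-m_ij)
    return L, U

def forward_sub(L, b):
    """Solve Ly = b for unit lower triangular L."""
    y = np.zeros(len(b))
    for i in range(len(b)):
        y[i] = b[i] - L[i, :i] @ y[:i]
    return y

def back_sub(U, y):
    """Solve Ux = y for upper triangular U."""
    x = np.zeros(len(y))
    for i in range(len(y) - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Problem 21: A = [[1, 2], [2, 3]], b = (3, -1).
A = np.array([[1.0, 2.0], [2.0, 3.0]])
b = np.array([3.0, -1.0])
L, U = lu_no_pivot(A)
y = forward_sub(L, b)
x = back_sub(U, y)
print(L)   # equals [[1, 0], [2, 1]]
print(U)   # equals [[1, 2], [0, -1]]
print(x)   # equals [-11, 7]
```

Production code would use a pivoting factorization instead, since this scheme fails whenever a zero appears on the diagonal during elimination.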
A12 (−3), A13 (2) 1 Hence, L = 3 −2 1 obtain y = −3 −2 2. A23 (2) 00 1 0 . We now solve the triangular systems Ly = b and U x = y. From Ly = b, we −2 1 −1/12 . Then U x = y yields x = 1/3 . 1/2 191 24. We have 43 00 8 1 20 0 5 36 0 0 −5 7 4 3 00 4 3 00 4 30 1 0 −5 2 0 2 0 −5 2 0 3 0 −5 2 ∼ ∼ ∼ 0 5 3 6 0 0 5 6 0 05 0 0 −5 7 0 0 −5 7 0 00 1. A12 (−2) 2. A23 (1) 0 0 = U. 6 13 3. A34 (1) 1 0 00 2 1 0 0 . We The only nonzero multipliers are m21 = 2, m32 = −1, and m43 = −1. Hence, L = 0 −1 1 0 0 0 −1 1 2 −1 now solve the triangular systems Ly = b and U x = y. From Ly = b, we obtain y = −1 . Then U x = y 4 677/1300 −9/325 yields x = −37/65 . 4/13 25. We have 2 −1 −8 3 1 ∼ 2 −1 0 −1 = U =⇒ m21 = −4 =⇒ L = 10 −4 1 1. A12 (4) We now solve the triangular systems Lyi = bi , U xi = yi for i = 1, 2, 3. We have Ly1 = b1 =⇒ y1 = Ly2 = b2 =⇒ y2 = Ly3 = b3 =⇒ y3 = 26. We have 3 11 2 15 5 11 . Then U x1 = y1 =⇒ x1 = . Then U x2 = y2 =⇒ x2 = . Then U x3 = y3 =⇒ x3 = −1 42 −1 1 3 1 4 ∼ 0 0 5 −7 1 4 13 13 −4 ; −11 −6.5 ; −15 −3 . −11 2 −1 2 10 ∼ 0 11 0 1. A12 (3), A13 (5) 4 13 0 2 10 = U. 1 2. A23 (−1) Thus, m21 = −3, m31 = −5, and m32 = 1. We now solve the triangular systems Lyi = bi , for i = 1, 2, 3. We have U xi = yi . 192 1 −29/13 Ly1 = e1 =⇒ y1 = 3 . Then U x1 = y1 =⇒ x1 = −17/13 2 2 0 18/13 Ly2 = e2 =⇒ y2 = 1 . Then U x2 = y2 =⇒ x2 = 11/13 −1 −1 0 −14/13 Ly3 = e3 =⇒ y3 = 0 . Then U x3 = y3 =⇒ x3 = −10/13 1 1 ; ; . 27. Observe that if Pi is an elementary permutation matrix, then Pi−1 = Pi = PiT . Therefore, we have − −1 − − TT T T P −1 = (P1 P2 . . . Pk )−1 = Pk 1 Pk−1 . . . P2 1 P1 1 = Pk Pk−1 . . . P2 . . . P1 = (P1 P2 . . . Pk )T = P T . 28. (a): Let A be an invertible upper triangular matrix with inverse B . Therefore, we have AB = In . Write A = [aij ] and B = [bij ]. We will show that bij = 0 for all i > j , which shows that B is upper triangular. We have n aik bkj = δij . k=1 Since A is upper triangular, aik = 0 whenever i > k . 
Therefore, we can reduce the above summation to

Σ (k = i to n) a_ik b_kj = δ_ij.

Let i = n. Then the above summation reduces to a_nn b_nj = δ_nj. If j = n, we have a_nn b_nn = 1, so a_nn ≠ 0. For j < n, we have a_nn b_nj = 0, and therefore b_nj = 0 for all j < n. Next let i = n − 1. Then we have

a_(n−1,n−1) b_(n−1,j) + a_(n−1,n) b_nj = δ_(n−1,j).

Setting j = n − 1 and using the fact that b_(n,n−1) = 0 by the above calculation, we obtain a_(n−1,n−1) b_(n−1,n−1) = 1, so a_(n−1,n−1) ≠ 0. For j < n − 1, we have a_(n−1,n−1) b_(n−1,j) = 0, so that b_(n−1,j) = 0. Next let i = n − 2. Then we have

a_(n−2,n−2) b_(n−2,j) + a_(n−2,n−1) b_(n−1,j) + a_(n−2,n) b_nj = δ_(n−2,j).

Setting j = n − 2 and using the fact that b_(n−1,n−2) = 0 and b_(n,n−2) = 0, we have a_(n−2,n−2) b_(n−2,n−2) = 1, so that a_(n−2,n−2) ≠ 0. For j < n − 2, we have a_(n−2,n−2) b_(n−2,j) = 0, so that b_(n−2,j) = 0. Proceeding in this way, we eventually show that b_ij = 0 for all i > j.

For an invertible lower triangular matrix A with inverse B, we can either modify the preceding argument, or we can proceed more briefly as follows: note that A^T is an invertible upper triangular matrix with inverse B^T. By the preceding argument, B^T is upper triangular. Therefore, B is lower triangular, as required.

(b): Let A be an invertible unit upper triangular matrix with inverse B. Use the notation from (a). By (a), we know that B is upper triangular; we must show that b_jj = 1 for all j. From a_nn b_nn = 1 (see the proof of (a)) and a_nn = 1, we get b_nn = 1. Moreover, from a_(n−1,n−1) b_(n−1,n−1) = 1, the fact that a_(n−1,n−1) = 1 proves that b_(n−1,n−1) = 1. Likewise, a_(n−2,n−2) b_(n−2,n−2) = 1 and a_(n−2,n−2) = 1 imply that b_(n−2,n−2) = 1. Continuing in this fashion, we prove that b_jj = 1 for all j.

For the last part, if A is an invertible unit lower triangular matrix with inverse B, then A^T is an invertible unit upper triangular matrix with inverse B^T, and by the preceding argument, B^T is a unit upper triangular matrix. This implies that B is a unit lower triangular matrix, as desired.

29.
(a): Since A is invertible, Corollary 2.6.12 implies that both L2 and U1 are invertible. Since L1 U1 = L2 U2, we can left-multiply by L2^(−1) and right-multiply by U1^(−1) to obtain L2^(−1) L1 = U2 U1^(−1).

(b): By Problem 28, we know that L2^(−1) is a unit lower triangular matrix and U1^(−1) is an upper triangular matrix. Therefore, L2^(−1) L1 is a unit lower triangular matrix and U2 U1^(−1) is an upper triangular matrix. Since these two matrices are equal, we must have L2^(−1) L1 = In and U2 U1^(−1) = In. Therefore, L1 = L2 and U1 = U2.

30. The system Ax = b can be written as QRx = b. If we can solve Qy = b for y and then solve Rx = y for x, then QRx = b as desired. Multiplying Qy = b by Q^T and using the fact that Q^T Q = In, we obtain y = Q^T b, so Rx = y can be replaced by Rx = Q^T b. Hence, to solve Ax = b, we first compute y = Q^T b and then solve the upper triangular system Rx = Q^T b by back-substitution.

Solutions to Section 2.8

True-False Review:

1. FALSE. According to the given information, part (c) of the Invertible Matrix Theorem fails, while part (e) holds. This is impossible.

2. TRUE. This holds by the equivalence of parts (d) and (f) of the Invertible Matrix Theorem.

3. FALSE. Part (d) of the Invertible Matrix Theorem fails according to the given information, and therefore part (b) also fails. Hence, the equation Ax = b does not have a unique solution. But it is not valid to conclude that the equation has infinitely many solutions; it could have no solutions. For instance, if A is the 3 × 3 matrix with rows (1, 0, 0), (0, 1, 0), (0, 0, 0) and b = (0, 0, 1)^T, there are no solutions to Ax = b, although rank(A) = 2.

4. FALSE. An easy counterexample is the matrix 0n, which fails to be invertible even though it is upper triangular. Since it fails to be invertible, it cannot be row-equivalent to In, by the Invertible Matrix Theorem.

Problems:

1. Since A is an invertible matrix, the only solution to Ax = 0 is x = 0. However, if we assume that AB = AC, then A(B − C) = 0.
If xi denotes the ith column of B − C , then xi = 0 for each i. That is, B − C = 0, or B = C , as required. 2. If rank(A) = n, then the augmented matrix A# for the system Ax = 0 can be reduced to REF such that each column contains a pivot except for the right-most column of all-zeros. Solving the system by back-substitution, we ﬁnd that x = 0, as claimed. 3. Since Ax = 0 has only the trivial solution, REF(A) contains a pivot in every column. Therefore, the linear system Ax = b can be solved by back-substitution for every b in Rn . Therefore, Ax = b does have a solution. Now suppose there are two solutions y and z to the system Ax = b. That is, Ay = b and Az = b. Subtracting, we ﬁnd A(y − z) = 0, and so by assumption, y − z = 0. That is, y = z. Therefore, there is only one solution to the linear system Ax = b. 194 4. If A and B are each invertible matrices, then A and B can each be expressed as a product of elementary matrices, say A = E 1 E2 . . . E k and B = E1 E2 . . . E l . Then AB = E1 E2 . . . Ek E1 E2 . . . El , so AB can be expressed as a product of elementary matrices. Thus, by the equivalence of (a) and (e) in the Invertible Matrix Theorem, AB is invertible. 5. We are assuming that the equations Ax = 0 and B x = 0 each have only the trivial solution x = 0. Now consider the linear system (AB )x = 0. Viewing this equation as A(B x) = 0, we conclude that B x = 0. Thus, x = 0. Hence, the linear equation (AB )x = 0 has only the trivial solution. Solutions to Section 2.9 Problems: 1. We have (−4)A − B T = 8 −16 −8 −24 4 4 −20 0 − 2. We have −3 0 2 10 2 −3 1 −3 0 2 2 1 −3 = 0 1 = 11 −18 −9 −24 4 2 −17 −1 AB = −2 426 −1 −1 5 0 16 8 6 −17 . Moreover, tr(AB ) = −1. 3. We have (AC )(AC )T = −2 26 −2 26 = 4 −52 −52 676 . 4. We have 12 0 −8 −8 (−4B )A = −4 12 0 −4 −2 426 −1 −1 5 0 −24 48 24 72 24 −24 −56 −48 . = −4 −28 52 −24 4 4 −20 0 5. Using Problem 2, we ﬁnd that (AB )−1 = 16 8 6 −17 −1 =− 1 320 −17 −8 −6 16 . . 195 6. 
We have CT C = −5 −6 3 1 −5 −6 3 = [71], 1 and tr(C T C ) = 71. 7. (a): We have AB = 1 2 2 5 3 7 3b −4 a = ab 3a − 5 2a + 4b 7a − 14 5a + 9b . In order for this product to equal I2 , we require 3a − 5 = 1, 7a − 14 = 0, 2a + 4b = 0, 5a + 9b = 1. We quickly solve this for the unique solution: a = 2 and b = −1. (b): We have 3 −1 2 BA = −4 2 −1 1 1 2 2 2 . = 0 0 −1 −1 1 2 2 5 3 7 8. We compute the (i, j )-entry of each side of the equation. We will denote the entries of AT by aT , which ij equals aji . On the left side, note that the (i, j )-entry of (AB T )T is the same as the (j, i)-entry of AB T , and n n (j, i)-entry of AB T = ajk bT = ki k=0 n bik aT , kj ajk bik = k=0 k=0 and the latter expression is the (i, j )-entry of BAT . Therefore, the (i, j )-entries of (AB T )T and BAT are the same, as required. 9. (a): The (i, j )-entry of A2 is n aik akj . k=1 (b): Assume that A is symmetric. That means that AT = A. We claim that A2 is symmetric. To see this, note that (A2 )T = (AA)T = AT AT = AA = A2 . Thus, (A2 )T = A2 , and so A2 is symmetric. 10. We are assuming that A is skew-symmetric, so AT = −A. To show that B T AB is skew-symmetric, we observe that (B T AB )T = B T AT (B T )T = B T AT B = B T (−A)B = −(B T AB ), as required. 196 11. We have 2 3 9 −1 −3 A2 = 0 0 = 0 0 , so A is nilpotent. 12. We have 0 A2 = 0 0 and 0 A3 = A2 A = 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 1 = 0 0 0 0 0 0 0 0 , 0 so A is nilpotent. 13. We have 14. We have −3e−3t 6t2 A (t) = 6/t −7t 1 6t − t2 /2 B (t) dt = t + t2 /2 0 et −2 sec2 t tan t . − sin t −5 t3 /3 3 t 4 / 4 + 2 t3 2 π sin(πt/2) 4 t − t /4 1 0 −7 11/2 = 3/2 e−1 1/3 11/4 . 2/π 3/4 15. Since A(t) is 3 × 2 and B (t) is 4 × 2, it is impossible to perform the indicated subtraction. 16. Since A(t) is 3 × 2 and B (t) is 4 × 2, it is impossible to perform the indicated subtraction. 17. From the last equation, we see that x3 = 0. Substituting this into the middle equation, we ﬁnd that x2 = 0.5. 
Finally, putting the values of x2 and x3 into the ﬁrst equation, we ﬁnd x1 = −6 − 2.5 = −8.5. Thus, there is a unique solution to the linear system, and the solution set is {(−8.5, 0.5, 0)}. 18. To solve this system, we need to reduce the corresponding augmented matrix for the linear system to row-echelon form. This gives us 5 −1 2 7 1 −2 6 9 0 ∼ −7 5 −3 −7 1 4 ∼ 0 0 1 11 20 7 1 2 −2 6 9 0 ∼ 0 −7 5 −3 −7 0 11 20 7 1 5 1 7/4 1/2 ∼ 0 1 0 −13/2 0 11 20 7 1 11 20 7 3 28 49 14 ∼ 0 1 7/4 1/2 82 137 42 0 82 137 42 11 20 7 1 7/4 1/2 . 0 1 −2/13 From the last row, we conclude that x3 = −2/13, and using the middle row, we can solve for x2 : we have 7 2 2 x2 + 4 · − 13 = 1 , so x2 = 20 = 10 . Finally, from the ﬁrst row we can get x1 : we have x1 +11· 10 +20· − 13 = 2 26 13 13 21 7, and so x1 = 13 . So there is a unique solution: 21 10 2 , ,− 13 13 13 . 197 1. A21 (2) 2. A12 (2), A13 (7) 3. M2 (1/28) 4. A23 (−82) 5. M3 (−2/13) 19. To solve this system, we need to reduce the corresponding augmented matrix for the linear system to row-echelon form. This gives us 1 1 1 2 −1 1 1 2 −1 1 1 2 −1 1 2 −1 1 2 3 1 0 1 5 ∼ 0 −2 2 4 ∼ 0 1 −1 −2 ∼ 0 1 −1 −2 . 44 0 12 0 −4 48 0 −4 4 8 00 0 0 From this row-echelon form, we see that z is a free variable. Set z = t. Then from the middle row of the matrix, y = t − 2, and from the top row, x + 2(t − 2) − t = 1 or x = −t + 5. So the solution set is {(−t + 5, t − 2, t) : t ∈ R} = {(5, −2, 0) + t(−1, 1, 1) : t ∈ R}. 1. A12 (−1), A13 (−4) 2. M2 (−1/2) 3. A23 (4) 20. To solve this system, we need to reduce the corresponding augmented matrix for the linear system to row-echelon form. This gives us 1 −2 −1 30 1 −2 −1 30 1 −2 −1 3 0 1 −2 −1 30 1 2 3 −2 4 5 −5 3 ∼ 0 0 3 1 3 ∼ 0 0 3 1 3 ∼ 0 0 1 1 /3 1 . 3 −6 −6 82 0 0 −3 −1 2 0 0 005 0 0 0 0 1 The bottom row of this matrix shows that this system has no solutions. 1. A12 (2), A13 (−3) 2. A23 (1) 3. M2 (1/3), M3 (1/3) 21. 
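Row-reduction answers like Problem 18's can be cross-checked with a linear solver instead of repeating the elimination by hand; a sketch:

```python
import numpy as np

# Problem 18's system: 5x1 - x2 + 2x3 = 7, -2x1 + 6x2 + 9x3 = 0,
# -7x1 + 5x2 - 3x3 = -7; unique solution (21/13, 10/13, -2/13).
A = np.array([[5.0, -1.0, 2.0],
              [-2.0, 6.0, 9.0],
              [-7.0, 5.0, -3.0]])
b = np.array([7.0, 0.0, -7.0])

x = np.linalg.solve(A, b)
print(np.allclose(x, [21/13, 10/13, -2/13]))   # True
```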
To solve this system, we need to reduce the corresponding augmented matrix for the linear system to row-echelon form. This gives us 3 0 −1 2 −1 1 3 1 1 3 1 −3 2 −1 1 3 0 ∼ 4 −2 −3 6 −1 5 4 −2 0 0 0 1 4 −2 0 0 1 3 1 −3 2 33 −21 3 0 −27 −12 ∼ 0 28 14 36 18 0 0 0 1 4 1 3 1 −3 2 −1 1 1 2 −3 −3 −6 6 0 50 ∼ 0 −27 −12 33 −21 12 ∼ 0 0 0 0 1 4 −2 0 1 −3 2 −1 1 3 1 −3 2 −1 −1 2 −1 1 2 0 −9 −4 11 −7 4 ∼ −3 6 −1 5 0 −14 −7 18 −9 9 0 1 4 −2 0 0 0 1 4 −2 −1 1 3 1 −3 2 −1 12 4 0 −27 −12 33 −21 12 ∼ −18 0 1 2 −3 −3 −6 −2 0 0 0 1 4 −2 1 3 1 −3 2 31 −3 2 −1 −3 12 −3 −3 −6 7 0 1 2 −3 ∼ 8 0 42 −48 −102 −150 0 0 1 − 7 − 17 7 00 1 4 −2 000 1 4 We see that x5 = t is the only free variable. Back substitution yields the remaining values: x5 = t, x4 = −4t − 2, x3 = − 41 15 − t, 7 7 2 33 x2 = − − t, 7 7 2 16 x1 = − + t. 7 7 −1 −6 . − 25 7 −2 198 So the solution set is 2 16 2 33 41 15 − + t, − − t, − − t, −4t − 2, t 7 7 7 7 7 7 = 1. P12 t :t∈R 16 33 15 2 2 41 , − , − , −4, 1 + − , − , − , −2, 0 7 7 7 77 7 2. A12 (−3), A13 (−4) 3. M2 (3), M3 (−2) 4. A23 (1) :t∈R . 5. P23 6. A23 (27) 7. M3 (1/42) 22. To solve this system, we need to reduce the corresponding augmented matrix for the linear system to row-echelon form. This gives us 11 1 1 −3 11 1 1 −3 6 1 1 1 1 −3 6 6 1 1 1 2 −5 8 1 0 0 0 1 −2 220 0 0 1 −2 2 ∼ ∼ 2 3 1 4 −9 17 0 1 −1 2 −3 5 0 1 −1 2 −3 5 2 2 2 3 −8 14 00 0 −1 2 −2 00 00 00 11 1 1 −3 6 3 0 1 −1 2 −3 5 . ∼ 0 0 0 1 −2 2 00 00 00 From this row-echelon form, we see that x5 = t and x3 = s are free variables. Furthermore, solving this system by back-substitution, we see that x5 = t, x4 = 2t + 2, x3 = s, x2 = s − t + 1, x1 = 2t − 2s + 3. So the solution set is {(2t − 2s + 3, s − t + 1, s, 2t + 2, t) : s, t ∈ R} = {t(2, −1, 0, 2, 1) + s(−2, 1, 1, 0, 0) + (3, 1, 0, 2, 0) : s, t ∈ R}. 1. A12 (−1), A13 (−2), A14 (−2) 2. A24 (1) 3. P23 23. To solve this system, we need to reduce the corresponding augmented matrix for the linear system to row-echelon form. 
This gives us 1 −3 −2i 6 2i 1 2 −2 1 ∼ 1 0 −3 2i 1 6 − 6i −2 −2 + 2i 1. A12 (2i) 2 ∼ 1 −3 2i 1 1 0 1 − 6 (1 + i) − 1 3 . 1 2. M2 ( 6−6i ) From the last augmented matrix above, we see that x3 is a free variable. Let us set x3 = t, where t is a complex number. Then we can solve for x2 using the equation corresponding to the second row of the 1 1 row-echelon form: x2 = − 3 + 6 (1+ i)t. Finally, using the ﬁrst row of the row-echelon form, we can determine 1 that x1 = 2 t(1 − 3i). Therefore, the solution set for this linear system of equations is 1 11 {( t(1 − 3i), − + (1 + i)t, t) : t ∈ C}. 2 36 199 24. We reduce the corresponding linear system as follows: 1 −k 6 2 3k 1 ∼ 1 0 −k 3 + 2k 6 k − 12 . 3 If k = − 2 , then each column of the row-reduced coeﬃcient matrix will contain a pivot, and hence, the linear 3 system will have a unique solution. If, on the other hand, k = − 2 , then the system is inconsistent, because the last row of the row-echelon form will have a pivot in the right-most column. Under no circumstances will the linear system have inﬁnitely many solutions. 25. First observe that if k = 0, then the second equation requires that x3 = 2, and then the ﬁrst equation requires x2 = 2. However, x1 is a free variable in this case, so there are inﬁnitely many solutions. Now suppose that k = 0. Then multiplying each row of the corresponding augmented matrix for the linear system by 1/k yields a row-echelon form with pivots in the ﬁrst two columns only. Therefore, the third variable, x3 , is free in this case. So once again, there are inﬁnitely many solutions to the system. We conclude that the system has inﬁnitely many solutions for all values of k . 26. Since this linear system is homogeneous, it already has at least one solution: (0, 0, 0). Therefore, it only remains to determine the values of k for which this will be the only solution. 
We reduce the corresponding matrix as follows: 10k k 2 −k 0 1 1/2 −1/2 0 10 k −1 0 1 2 k 1 −1 0 ∼ 10k 10 −10 0 ∼ 10k 10 −10 0 2 2 1 −1 0 1 1/2 −1/2 0 10k k −k 0 1 1/2 −1/2 0 1 1/2 −1/2 0 1 1/2 −1/2 0 3 4 5 1 −1 0 ∼ 0 1 −1 0 . ∼ 0 10 − 5k 5k − 10 0 ∼ 0 2 2 0 k − 5k 4k 0 k − 5k 4k 0 0 k2 − k 0 0 0 1. M1 (k ), M2 (10), M3 (1/2) 2. P13 3. A12 (−10k ), A13 (−10k ) 1 4. M2 ( 10−5k ) 5. A23 (5k − k 2 ) Note that the steps above are not valid if k = 0 or k = 2 (because Step 1 is not valid with k = 0 and Step 4 is not valid if k = 2). We will discuss those special cases individually in a moment. However if k = 0, 2, then the steps are valid, and we see from the last row of the last matrix that if k = 1, we have inﬁnitely many solutions. Otherwise, if k = 0, 1, 2, then the matrix has full rank, and so there is a unique solution to the linear system. If k = 2, then the last two rows of the original matrix are the same, and so the matrix of coeﬃcients of the linear system is not invertible. Therefore, the linear system must have inﬁnitely many solutions. If k = 0, we reduce the original linear system as follows: 10 0 2 0 −1 0 1 0 −1/10 0 1 0 −1/10 0 1 1 2 3 1 −1 0 ∼ 0 1 −1 0 ∼ 0 1 −1 0 ∼ 0 1 −1 0 21 −1 0 0 1 −4/5 0 0 0 −1/10 0 1 −1 0 . 0 1/5 0 The last matrix has full rank, so there will be a unique solution in this case. 1. M1 (1/10) 2. A13 (−2) 3. A23 (−1) To summarize: The linear system has inﬁnitely many solutions if and only if k = 1 or k = 2. Otherwise, the system has a unique solution. 200 27. To solve this system, we need to reduce the corresponding augmented matrix for the linear system to row-echelon form. This gives us 1 −k k2 0 1 −k k2 0 1 −k k2 0 1 −k k 2 0 1 2 3 1 1 ∼ 0 1 . 0 k 0 ∼ 0 k k − k2 0 ∼ 0 1 −1 1 −1 2 0 1 −1 1 0 1 −1 1 0 k k−k 0 0 0 2k − k 2 −k 1. A12 (−1) 2. P23 3. A23 (−k ) Now provided that 2k − k 2 = 0, the system can be solved without free variables via back-substitution, and therefore, there is a unique solution. 
Consider now what happens if 2k − k 2 = 0. Then either k = 0 or k = 2. If k = 0, then only the ﬁrst two columns of the last augmented matrix above are pivoted, and we have a free variable corresponding to x3 . Therefore, there are inﬁnitely many solutions in this case. On the other hand, if k = 2, then the last row of the last matrix above reﬂects an inconsistency in the linear system, and there are no solutions. To summarize, the system has no solutions if k = 2, a unique solution if k = 0 and k = 2, and inﬁnitely many solutions if k = 0. 28. No, there are no common points of intersection. A common point of intersection would be indicated by a solution to the linear system consisting of the equations of the three planes. However, the corresponding augmented matrix can be row-reduced as follows: 4 4 12 14 12 1 12 1 1 2 0 1 −1 1 ∼ 0 1 −1 1 ∼ 0 1 −1 1 . 13 00 0 1 −1 −4 00 0 −5 The last row of this matrix shows that the linear system is inconsistent, and so there are no points common to all three planes. 1. A13 (−1) 2. A23 (−1) 29. (a): We have 4 −2 7 5 1 ∼ 1 −2 7/4 5 1. M1 (1/4) 1 0 2 ∼ 2. A12 (2) 7/4 17/2 1 0 3 ∼ 7/4 1 . 3. M2 (2/17) (b): We have: rank(A) = 2, since the row-echelon form of A in (a) consists two nonzero rows. (c): We have 4 −2 71 50 0 1 1 ∼ 1 −2 7/4 1/4 5 0 4 ∼ 0 1 2 ∼ 1 0 7/4 1/4 17/2 1/2 1 0 5/34 −7/34 2/17 0 1 1/17 . 0 1 3 ∼ 1 7/4 1/4 0 1 1/17 0 2/17 201 1. M1 (1/4) 2. A12 (2) 3. M2 (2/17) 4. A21 (−7/4) Thus, 7 − 34 5 34 1 17 A−1 = . 2 17 30. (a): We have 2 −7 −4 14 1 ∼ 2 −7 0 0 1. A12 (2) 1 −7/2 0 0 2 ∼ . 2. M1 (1/2) (b): We have: rank(A) = 1, since the row-echelon form of A in (a) has one nonzero row. (c): Since rank(A) < 2, A is not invertible. 31. (a): We have 1 −1/3 2 1 −1/3 2 1 −1/3 3 −1 6 1 2 3 0 2 3 ∼ 0 2 3 ∼ 0 2 2 3 ∼ 0 1 −5/3 0 0 −4/3 −2 0 0 3 −5 0 2. A13 (−1) 1. M1 (1/3), M3 (1/3) 2 1 −1/3 4 3 ∼ 0 1 0 0 0 3. A23 (2/3) 4. M2 (1/2) (b): We have: rank(A) = 2, since the row-echelon form of A in (a) consists of two nonzero rows. 
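The rank found in (b) can be confirmed numerically. The matrix of Problem 31 is assumed here to be A = [3 −1 6; 0 2 3; 3 −5 0], as the row operations M1(1/3), M3(1/3), A13(−1), A23(2/3) shown in (a) suggest; a small sketch:

```python
import numpy as np

# Problem 31: matrix inferred (an assumption) from the reduction shown in (a).
A = np.array([[3.0, -1.0, 6.0],
              [0.0,  2.0, 3.0],
              [3.0, -5.0, 0.0]])

rank = np.linalg.matrix_rank(A)   # rank 2 < 3, so A is singular
det = np.linalg.det(A)            # determinant is (numerically) zero
print(rank, abs(det) < 1e-9)
```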
(c): Since rank(A) < 3, A is not invertible. 32. (a): We have 2 1 0 0 1 2 0 0 0 0 3 4 0 1 012 ∼ 4 0 3 0 2 1 0 0 1 40 ∼ 0 0 1. P12 0 0 3 4 2 1 0 0 0 1 20 0 0 2 0 −3 0 0 ∼ 4 0 03 4 3 0 0 1 −1 0 0 1 0 050 ∼ 1 −1 0 0 7 0 2. A12 (−2), A34 (−1) 3. P34 2 1 0 0 1 20 0 3 0 −3 0 0 ∼ 0 0 1 −1 0 03 4 0 0 0 0 . 1 −1 0 1 4. M2 (−1/3), A34 (−3) 5. M4 (1/7) (b): We have: rank(A) = 4, since the row-echelon form of A in (a) consists of four nonzero rows. 2 3/2 . 0 202 (c): We have 210 1 2 0 0 0 3 004 0 0 4 3 1 0 0 0 0 1 0 0 1 2 0 −3 ∼ 0 0 0 0 3 100 0 0 1 0 0 ∼ 0 0 1 −1 000 1 5 1. P12 12000100 0 012 1 0 0 1 0 0 02 ∼ ∼ 0 0 0 3 4 0 0 1 0 1 00430001 120 0 0 00 1 00 0 0 1 −2 0 040 1 0 0 ∼ 0 −1 1 0 0 1 −1 1 −1 0 3 40 000 7 0 10 2/3 −1/3 0 0 1000 −1/3 2/3 0 0 60 1 0 0 ∼ 0 0 −1 1 0 0 1 0 0001 0 0 4/7 −3/7 0 0 1 0 2. A12 (−2), A34 (−1) 3. P34 4. A34 (−3), M2 (−1/3) 0 1 00 1 −2 0 0 0 0 1 0 0 0 −1 1 0 1 0 0 −1/3 2/3 0 0 0 0 −1 1 0 0 4 −3 2/3 −1/3 0 0 −1/3 2/3 0 0 . 0 0 −3/7 4/7 0 0 4/7 −3/7 1 20 0 0 −3 0 0 0 03 4 0 0 1 −1 5. M4 (1/7), A21 (−2) 6. A43 (1) Thus, 2/3 −1/3 0 0 −1/3 2/3 0 0 . = 0 0 −3/7 4/7 0 0 4/7 −3/7 A−1 33. (a): We have 1 0 0 1 0 0 1 0 0 1 00 1 3 0 0 1 2 3 4 5 0 2 −1 ∼ 0 2 −1 ∼ 0 −1 2 ∼ 0 −1 2 ∼ 0 2 −1 ∼ 0 1 −1 2 0 −1 2 0 2 −1 0 03 0 1 −1 2 1. M1 (1/3) 2. A13 (−1) 3. P23 4. A23 (2) 5. M2 (−1), M3 (1/3) (b): We have: rank(A) = 3, since the row-echelon form of A in (a) has 3 nonzero rows. (c): We have 3 0 0100 1 0 1 0 2 −1 0 1 0 ∼ 0 2 1 −1 2001 1 −1 1 0 0 1/3 3 2 −1/3 ∼ 0 −1 0 2 −1 0 10 0 1/3 0 5 0 ∼ 0 1 −2 1/3 00 1 −2/9 1/3 0 1/3 0 −1 0 1 20 0 00 1 4 0 1 ∼ 0 10 0 0 1 6 −1 ∼ 0 2/3 0 0 1 0 0 1/3 2 0 ∼ 0 2 −1 0 1 0 −1 2 −1/3 0 0 1/3 0 0 −1 2 −1/3 0 1 0 3 −2/3 1 2 0 0 1/3 0 0 1 0 −1/9 2/3 1/3 . 0 1 −2/9 1/3 2/3 00 1 0 01 0 0 1 −2 . 0 1 203 2. A13 (−1) 1. M1 (1/3) 3. P23 5. M2 (−1), M3 (1/3) 4. A23 (2) 6. A32 (2) Hence, A−1 1/3 0 0 = −1/9 2/3 1/3 . −2/9 1/3 2/3 34. (a): We have −2 −3 1 1 42 14 1 2 1 4 2 ∼ −2 −3 1 ∼ 0 5 0 53 0 53 05 1. P12 2. A12 (2) 2 1 3 5 ∼ 0 3 0 3. 
A23 (−1) 4 2 14 4 5 5 ∼ 0 1 0 −2 00 2 1 . 1 4. M2 (1/5), M3 (−1/2) (b): We have: rank(A) = 3, since the row-echelon form of A in (a) consists of 3 nonzero rows. (c): We have −2 −3 1 1 0 0 1 420 1 1 4 2 0 1 0 ∼ −2 −3 1 1 0 53001 0 530 0 10 1 14 2 4 3 5 1 2 0 ∼ 0 ∼ 0 5 0 0 −2 −1 −2 1 0 1 0 −2 −4/5 −3/5 0 1 5 6 1 1/5 2/5 0 ∼ 0 ∼ 0 1 00 1 1/2 1 −1/2 0 1. P12 2. A12 (2) 3. A23 (−1) 1 0 0 0 1 2 0 ∼ 0 1 0 42 0 1 1 1/5 0 1 1/2 201 512 300 1 0 2/5 0 1 −1/2 00 1/5 1 0 −3/10 01 1/2 4. M2 (1/5), M3 (−1/2) 4 5 5 0 0 1 7/5 −1 −3/5 1/2 . 1 −1/2 5. A21 (−4) 6. A31 (2), A32 (−1) Thus, A−1 35. We use the Gauss-Jordan 1 −1 3 1 0 4 −3 13 0 1 1 1400 1 −1 3 10 3 1 1 −4 1 ∼ 0 0 0 1 −7 2 1/5 7/5 −1 = −3/10 −3/5 1/2 . 1/2 1 −1/2 method to ﬁnd A−1 : 0 1 −1 3 1 0 ∼ 0 11 1 0 21 0 104 4 0 ∼ 0 1 1 −1 001 1. A12 (−4), A13 (−1) 2. A23 (−2) 1 −4 −1 −3 −4 −7 0 1 2 0 ∼ 0 1 0 1 0 5 1 0 ∼ 2 −1 0 1 0 3. M3 (−1) −1 3 1 00 1 0 1 1 −4 0 −1 7 −2 1 1 0 0 25 −7 4 010 3 −1 1 . 0 0 1 −7 2 −1 4. A21 (1) 5. A31 (−4), A32 (−1) 204 Thus, A−1 Now xi = A−1 ei for each i. So 25 x1 = A−1 e1 = 3 , −7 25 −7 4 1 . = 3 −1 −7 2 −1 −7 x2 = A−1 e2 = −1 , 2 4 x3 = A−1 e3 = 1 . −1 36. We have xi = A−1 bi , where A−1 = − 1 39 −2 −5 −7 2 . Therefore, x1 = A−1 b1 = − 1 39 −2 −5 −7 2 x2 = A−1 b2 = − and x3 = A−1 b3 = − 1 39 1 39 1 2 =− −2 −5 −7 2 −2 −5 −7 2 −2 5 1 39 4 3 =− = 1 39 −23 −22 =− 1 39 1 39 −12 −3 −21 24 = 12 3 = 1 13 4 1 1 39 23 22 , 21 −24 = 1 13 = 1 39 , 7 −8 . 37. (a): We have (A−1 B )(B −1 A) = A−1 (BB −1 )A = A−1 In A = A−1 A = In and (B −1 A)(A−1 B ) = B −1 (AA−1 )B = B −1 In B = B −1 B = In . Therefore, (B −1 A)−1 = A−1 B. (b): We have (A−1 B )−1 = B −1 (A−1 )−1 = B −1 A, as required. 38. We prove this by induction on k . If k = 0, then Ak = A0 = In and S −1 D0 S = S −1 In S = S −1 S = In . Thus, Ak = S −1 Dk S when k = 0. Now assume that Ak−1 = S −1 Dk−1 S . We wish to show that Ak = S −1 Dk S . 
We have Ak = AAk−1 = (S −1 DS )(S −1 Dk−1 S ) = S −1 D(SS −1 )Dk−1 S = S −1 DIn Dk−1 S = S −1 DDk−1 S = S −1 Dk S, as required. 39. (a): We reduce A to the identity matrix: 4 −2 7 5 1 ∼ 1 −2 7 4 5 2 ∼ 1 0 7 4 17 2 3 ∼ 1 0 7 4 1 4 ∼ 1 0 0 1 . 205 1. M1 ( 1 ) 4 2 3. M2 ( 17 ) 2. A12 (2) 4. A21 (− 7 ) 4 The elementary matrices corresponding to these row operations are E1 = 1 4 0 0 1 , E2 = 10 21 , E3 = 1 0 0 , 2 17 7 1 −4 0 1 E4 = . We have E4 E3 E2 E1 A = I2 , so that − − − − A = E 1 1 E2 1 E3 1 E 4 1 = 4 0 0 1 1 −2 1 0 0 1 0 1 0 17 2 7 4 1 , − which is the desired expression since Ei 1 is an elementary matrix for each i. (b): We can reduce A to upper triangular form by the following elementary row operation: 4 −2 7 5 1 ∼ 4 0 7 17 2 . 1. A12 ( 1 ) 2 Therefore we have the multiplier m12 = − 1 . Hence, setting 2 L= 10 −1 1 2 and U= 4 0 7 , 17 2 we have the LU factorization A = LU , which can be easily veriﬁed by direct multiplication. 40. (a): We reduce 210 1 2 0 0 0 3 004 A to the identity 12 0 012 1 ∼ 4 0 0 00 3 100 50 1 0 ∼ 0 0 1 004 matrix: 00 0 02 ∼ 3 4 43 0 06 4 ∼ 3 3 1 1 200 0 −3 0 0 3 0 ∼ 0 0 3 4 0 0 0 043 100 0 100 010 070 1 0 4 ∼ 001 001 3 000 0 0 0 −7 3 2. A12 (−2) 3. M2 (− 1 ) 3 6. A34 (−4) 1. P12 3 M4 (− 7 ) The elementary matrices corresponding to 0100 10 1 0 0 0 −2 1 E1 = 0 0 1 0 , E2 = 0 0 0001 00 7. 2 1 0 0 0 08 4 ∼ 3 1 4. A21 (−2) 8. 0 0 3 4 1 0 040 ∼ 4 0 0 3 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 3 4 0 0 4 3 0 0 . 0 1 5. M3 ( 1 ) 3 A43 (− 4 ) 3 these row operations are 1 000 00 1 0 0 , E3 = 0 − 3 0 0 , 0 10 0 1 0 01 0 001 1 −2 0 0 0 1 0 0 E4 = 0 0 1 0 0 001 206 1 0 E5 = 0 0 0 1 0 0 0 0 , 0 1 0 0 1 3 0 0 00 1 0 0 , 0 1 0 0 −4 1 1 0 E6 = 0 0 1 0 E7 = 0 0 0 0 0 0 , 1 0 3 0 −7 0 1 0 0 100 0 0 1 0 0 E8 = 0 0 1 −4 . 
3 000 1 We have E8 E7 E6 E5 E4 E3 E2 E1 A = I4 so that − − − − − − − − A = E1 1 E2 1 E 3 1 E4 1 E 5 1 E6 1 E7 1 E 8 1 1000 0100 1 0 0 0 2 1 0 0 = 0 0 1 0 0 0 1 0 0001 0001 10 1000 0 1 0 0 0 1 ··· 0 0 3 0 0 0 00 0001 1200 1 000 0 −3 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0001 0 001 100 0 00 0 0 0 0 1 0 0 1 0 0 0 1 7 41 0 0 0 −3 ··· 1 0 0 0 0 1 0 0 0 0 1 0 0 0 4 , 3 1 − which is the desired expression since Ei 1 is an elementary matrix for each i. (b): We can reduce A to upper 21 1 2 0 0 00 triangular form 2 00 0 010 ∼ 3 4 0 43 0 by the following elementary row operations: 210 0 100 3 0 0 020 3 0 2 2 . ∼ 0 0 3 4 034 043 0 0 0 −7 3 1. A12 (− 1 ) 2 Therefore, the nonzero multipliers are m12 = L= 1 1 2 0 0 0 1 0 0 0 0 1 4 3 1 2 0 0 0 1 2. A34 (− 4 ) 3 and m34 = 4 . Hence, setting 3 and 2 0 U = 0 0 1 3 2 0 0 0 0 0 0 , 3 4 0 −7 3 we have the LU factorization A = LU , which can be easily veriﬁed by direct multiplication. 41. (a): We reduce A to the identity matrix: 1 −1 2 1 −1 2 3 0 0 1 −1 2 1 −1 2 1 2 3 4 1 0 2 −1 ∼ 0 2 −1 ∼ 0 2 −1 ∼ 0 1 −1 1 −2 ∼ 0 2 1 −1 2 3 0 0 0 3 −6 0 3 −6 0 0 −9 2 1 −1 2 1 6 1 −1 ∼ 0 ∼ 0 2 0 0 1 0 5 3 10 0 0 1 2 7 8 1 1 −1 ∼ 0 1 −2 ∼ 0 2 0 00 1 0 1 0 1 0 0 0 . 1 207 1. P13 3. M2 ( 1 ) 2 2. A13 (−3) 3 7. A31 (− 2 ) 6. A21 (1) The elementary matrices corresponding to 001 1 E1 = 0 1 0 , E 2 = 0 100 −3 10 0 1 0 , E6 = 0 E5 = 0 1 2 0 0 0 −9 4. A23 (−3) 5. M3 (− 2 ) 9 8. A32 ( 1 ) 2 these row operations are 100 1 00 00 1 0 1 0 , E3 = 0 1 0 , E4 = 0 2 0 −3 1 01 001 100 10 1 0 −3 2 1 1 0 , E7 = 0 1 0 , E8 = 0 1 2 . 01 00 1 001 We have E8 E7 E6 E5 E4 E3 E2 E1 A = I3 so that − − − − − − − − A = E1 1 E2 1 E 3 1 E4 1 E 5 1 E6 1 E7 1 E 8 1 001 100 100 = 0 1 0 0 1 0 0 2 0 100 301 001 10 0 1 −1 0 0 1 ··· 0 1 9 0 0 0 0 −2 100 0 1 0 ··· 031 3 10 0 0 102 0 0 1 0 0 1 −1 , 2 1 001 00 1 − which is the desired expression since Ei 1 is an elementary matrix for each i. 
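Factorizations into elementary matrices like these can be checked by direct multiplication. A numerical sketch for the 2 × 2 case of Problem 39(a), taking A = [4 7; −2 5] as the reduction there indicates:

```python
import numpy as np

# Problem 39(a): A = E1^{-1} E2^{-1} E3^{-1} E4^{-1}, with A assumed to be
# [[4, 7], [-2, 5]] (inferred from the reduction shown).
A = np.array([[4.0, 7.0], [-2.0, 5.0]])

E1_inv = np.array([[4.0, 0.0], [0.0, 1.0]])        # undoes M1(1/4)
E2_inv = np.array([[1.0, 0.0], [-2.0, 1.0]])       # undoes A12(2)
E3_inv = np.array([[1.0, 0.0], [0.0, 17.0 / 2]])   # undoes M2(2/17)
E4_inv = np.array([[1.0, 7.0 / 4], [0.0, 1.0]])    # undoes A21(-7/4)

product = E1_inv @ E2_inv @ E3_inv @ E4_inv
print(np.allclose(product, A))  # True
```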
(b): We can reduce A to upper triangular form by the following elementary row operations: 30 0 3 0 0 3 0 0 1 2 0 2 −1 ∼ 0 2 −1 ∼ 0 2 −1 . 3 1 −1 2 0 −1 2 00 2 1. A13 (− 1 ) 3 Therefore, the nonzero multipliers are m13 1 0 1 L= 0 1 −1 3 2 2. A23 ( 1 ) 2 and m23 = − 1 . Hence, setting 2 0 30 0 0 and U = 0 2 −1 , 3 1 00 2 = 1 3 we have the LU factorization A = LU , which can be veriﬁed by direct multiplication. 42. (a): We reduce A to the identity matrix: −2 −3 1 1 42 1 1 4 2 ∼ −2 −3 1 0 53 0 53 14 2 14 2 6 5 ∼ 0 1 −8 ∼ 0 1 −8 0 0 45 00 1 14 2 14 2 14 3 4 ∼ 0 5 5 ∼ 0 5 5 ∼ 0 1 0 5 −3 0 1 −8 05 1 0 34 1 0 34 10 7 8 9 ∼ 0 1 −8 ∼ 0 1 0 ∼ 0 1 00 1 001 00 2 2 −8 5 0 0 . 1 208 1. P12 6. 2. A12 (2) 1 M3 ( 45 ) 3. A23 (−1) 7. A21 (−4) The elementary matrices corresponding to these row 010 1 E1 = 1 0 0 , E 2 = 2 001 0 10 E4 = 0 0 01 1 −4 1 E7 = 0 0 0 4. P23 8. A32 (8) 5. A23 (−5) 9. A31 (−34) operations are 00 1 00 1 0 , E3 = 0 1 0 , 01 0 −1 1 0 1 0 1 , E5 = 0 1 0 0 −5 10 0 0 , E8 = 0 1 00 1 100 0 0 , E6 = 0 1 0 , 1 1 0 0 45 1 0 −34 0 0 . 8 , E9 = 0 1 00 1 1 We have E9 E8 E7 E6 E5 E4 E3 E2 E1 A = I3 so that − − − − − − − − − A = E1 1 E 2 1 E3 1 E 4 1 E5 1 E6 1 E 7 1 E8 1 E 9 1 100 100 010 = 1 0 0 −2 1 0 0 1 0 011 001 001 1 100 100 ··· 0 1 0 0 1 0 0 0 0 0 45 051 1 0 0 4 1 0 0 1 ··· 0 1 10 0 0 0 0 1 −8 0 0 00 1 1 0 0 1 0 1 0 34 0 , 1 − which is the desired expression since Ei 1 is an elementary matrix for each i. (b): We can reduce A to upper triangular form by the following elementary row operations: −2 −3 1 −2 −3 1 −2 −3 1 1 2 5 5 5 5 1 4 2 ∼ 0 ∼ 0 . 2 2 2 2 0 53 0 53 0 0 −2 1 Therefore, the nonzero multipliers are m12 = − 2 and m23 = 2. Hence, setting 00 L = −1 1 0 2 021 1 and −2 −3 1 5 5 U = 0 , 2 2 0 0 −2 we have the LU factorization A = LU , which can be veriﬁed by direct multiplication. 43. (a): Note that (A+B )3 = (A+B )2 (A+B ) = (A2 +AB +BA+B 2 )(A+B ) = A3 +ABA+BA2 +B 2 A+A2 B +AB 2 +BAB +B 3 . 
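Because AB and BA need not agree, the expansion in 43(a) keeps all eight ordered words of length 3 in A and B. A quick numerical spot-check with random (generically non-commuting) matrices:

```python
import numpy as np

# Problem 43(a): (A+B)^3 = A^3 + ABA + BA^2 + B^2A + A^2B + AB^2 + BAB + B^3.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

lhs = np.linalg.matrix_power(A + B, 3)
rhs = (A @ A @ A + A @ B @ A + B @ A @ A + B @ B @ A
       + A @ A @ B + A @ B @ B + B @ A @ B + B @ B @ B)
print(np.allclose(lhs, rhs))  # True
```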
209 (b): We have (A−B )3 = (A−B )2 (A−B ) = (A2 −AB −BA+B 2 )(A−B ) = A3 −ABA−BA2 +B 2 A−A2 B +AB 2 +BAB −B 3 . (c): The answer is 2k , because each term in the expansion of (A + B )k consists of a string of k matrices, each of which is either A or B (2 possibilities for each matrix in the string). Multiplying the possibilities for each position in the string of length k , we get 2k diﬀerent strings, and hence 2k diﬀerent terms in the expansion of (A + B )k . 44. We claim that A 0 0 B −1 −1 A−1 0 = 0 B . To see this, simply note that A 0 0 B −1 0 B = In 0 0 Im = In+m A−1 0 and A−1 0 A 0 0 B −1 = In 0 0 Im = In+m . 0 B 45. For a 2 × 4 matrix, the leading ones can occur in 6 diﬀerent positions: 1∗∗∗ 01∗∗ , 1∗∗∗ 001∗ For a 3 × 4 matrix, 1 0 0 , 1∗∗∗ 0001 the leading ones can ∗∗∗ 1∗ 1 ∗ ∗ , 0 1 01∗ 00 , 0 0 1∗∗ 01∗ , 1∗∗ 001 0 0 occur in 4 diﬀerent positions: ∗∗ 1∗∗∗ 0 ∗ ∗ , 0 0 1 ∗ , 0 01 0001 0 , 001∗ 0001 1∗∗ 0 1 ∗ 001 For a 4 × 6 matrix, the leading ones can occur in 15 diﬀerent positions: 1∗∗∗∗∗ 1∗∗∗∗∗ 1∗∗∗∗∗ 1∗∗∗∗∗ 0 1 ∗ ∗ ∗ ∗0 1 ∗ ∗ ∗ ∗0 1 ∗ ∗ ∗ ∗0 1 ∗ ∗ ∗ ∗ 0 0 1 ∗ ∗ ∗ , 0 0 1 ∗ ∗ ∗ , 0 0 1 ∗ ∗ ∗ , 0 0 0 1 ∗ ∗ 0001∗∗ 00001∗ 000001 00001∗ 1∗∗ 0 1 ∗ 0 0 0 000 1∗∗ 0 0 1 0 0 0 000 ∗∗∗ ∗ ∗ ∗ , 1 ∗ ∗ 001 ∗∗∗ ∗ ∗ ∗ , 0 1 ∗ 001 1∗∗∗∗∗ 1∗∗∗∗∗ 0 1 ∗ ∗ ∗ ∗0 0 1 ∗ ∗ ∗ , 0 0 0 0 1 ∗0 0 0 1 ∗ ∗ 000001 00001∗ 1∗∗∗∗∗ 01∗∗∗∗ 0 0 0 1 ∗ ∗0 0 1 ∗ ∗ ∗ , 0 0 0 0 1 ∗0 0 0 1 ∗ ∗ 000001 00001∗ 1∗∗∗∗ 0 0 1 ∗ ∗ , 0 0 0 1 ∗ 00000 01∗∗∗ 0 0 1 ∗ ∗ , 0 0 0 1 ∗ 00000 , ∗ ∗ , ∗ 1 ∗ ∗ , ∗ 1 210 0 0 0 0 1∗∗∗∗ 0 1 ∗ ∗ ∗ , 0 0 0 1 ∗ 00001 0 0 0 0 1∗∗∗∗ 001∗∗∗ 0 0 1 ∗0 0 0 1 ∗ ∗ , 0 0 0 1 ∗0 0 0 0 1 ∗ 00001 000001 For an m × n matrix with m ≤ n, the answer is the binomial coeﬃcient C (n, m) = n m = n! . m!(n − m)! This represents n “choose” m, which is the number of ways to choose m columns from the n columns of the matrix in which to put the leading ones. This choice then determines the structure of the matrix. 46. The inverse of A10 is B 5 . 
To see this, we use the fact that A^2 B = I_n = B A^2 as follows:

A^10 B^5 = A^8 (A^2 B) B^4 = A^8 I_n B^4 = A^8 B^4 = A^6 (A^2 B) B^3 = A^6 I_n B^3 = A^6 B^3 = A^4 (A^2 B) B^2 = A^4 I_n B^2 = A^4 B^2 = A^2 (A^2 B) B = A^2 I_n B = A^2 B = I_n

and

B^5 A^10 = B^4 (B A^2) A^8 = B^4 I_n A^8 = B^4 A^8 = B^3 (B A^2) A^6 = B^3 I_n A^6 = B^3 A^6 = B^2 (B A^2) A^4 = B^2 I_n A^4 = B^2 A^4 = B (B A^2) A^2 = B I_n A^2 = B A^2 = I_n.

Solutions to Section 3.1

True-False Review:

1. TRUE. Let A = [a b; 0 c]. Then det(A) = ac − b·0 = ac, which is the product of the elements on the main diagonal of A.

2. TRUE. Let A = [a b c; 0 d e; 0 0 f]. Using the schematic of Figure 3.1.1, we have det(A) = adf + be·0 + c·0·0 − 0·d·c − 0·e·a − f·0·b = adf, which is the product of the elements on the main diagonal of A.

3. FALSE. The volume of this parallelepiped is determined by the absolute value of det(A), since det(A) could very well be negative.

4. TRUE. There are 12 of each. The ones of even parity are (1,2,3,4), (1,3,4,2), (1,4,2,3), (2,1,4,3), (2,4,3,1), (2,3,1,4), (3,1,2,4), (3,2,4,1), (3,4,1,2), (4,1,3,2), (4,2,1,3), (4,3,2,1), and the others are all of odd parity.

5. FALSE. Many examples are possible here. If we take A = [1 0; 0 0] and B = [0 0; 0 1], then det(A) = det(B) = 0, but A + B = I_2, and det(I_2) = 1 ≠ 0.

6. FALSE. Many examples are possible here. If we take A = [1 2; 3 4], for example, then det(A) = 1·4 − 2·3 = −2 < 0, even though all elements of A are positive.

7. TRUE. In the summation that defines the determinant, each term in the sum is a product consisting of one element from each row and each column of the matrix. But that means one of the factors in each term will be zero, since it must come from the row containing all zeros. Hence, each term is zero, and the summation is zero.

8. TRUE. If the determinant of the 3 × 3 matrix [v1, v2, v3] is zero, then the volume of the parallelepiped determined by the three vectors is zero, and this means precisely that the three vectors all lie in the same plane.
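Throughout the problems that follow, the sign of a permutation is σ(p) = (−1)^N(p), where N(p) counts the inversions of p. An illustrative Python sketch (not from the text):

```python
# Parity of a permutation via a direct inversion count.
def inversions(p):
    """Number of pairs i < j with p[i] > p[j]."""
    return sum(1 for i in range(len(p))
                 for j in range(i + 1, len(p)) if p[i] > p[j])

def sigma(p):
    """sigma(p) = (-1)^N(p): +1 for even permutations, -1 for odd."""
    return (-1) ** inversions(p)

print(inversions((2, 1, 3, 4)), sigma((2, 1, 3, 4)))        # 1 -1 (odd)
print(inversions((5, 4, 3, 2, 1)), sigma((5, 4, 3, 2, 1)))  # 10 1 (even)
```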
Problems:

1. σ(2,1,3,4) = (−1)^N(2,1,3,4) = (−1)^1 = −1, odd.

2. σ(1,3,2,4) = (−1)^N(1,3,2,4) = (−1)^1 = −1, odd.

3. σ(1,4,3,5,2) = (−1)^N(1,4,3,5,2) = (−1)^4 = 1, even.

4. σ(5,4,3,2,1) = (−1)^N(5,4,3,2,1) = (−1)^10 = 1, even.

5. σ(1,5,2,4,3) = (−1)^N(1,5,2,4,3) = (−1)^4 = 1, even.

6. σ(2,4,6,1,3,5) = (−1)^N(2,4,6,1,3,5) = (−1)^6 = 1, even.

7. det(A) = |a11 a12; a21 a22| = σ(1,2)a11a22 + σ(2,1)a12a21 = a11a22 − a12a21.

8. det(A) = |1 −1; 2 3| = 1·3 − (−1)·2 = 5.

9. det(A) = |2 −1; 6 −3| = 2·(−3) − (−1)·6 = 0.

10. det(A) = |−4 10; −1 8| = −4·8 − 10·(−1) = −22.

11. det(A) = |1 −1 0; 2 3 6; 0 2 −1| = 1·3·(−1) + (−1)·6·0 + 0·2·2 − 0·3·0 − 6·2·1 − (−1)·(−1)·2 = −17.

12. det(A) = |2 1 5; 4 2 3; 9 5 1| = 2·2·1 + 1·3·9 + 5·4·5 − 5·2·9 − 3·5·2 − 1·1·4 = 7.

13. det(A) = |0 0 2; 0 −4 1; −1 5 −7| = 0·(−4)·(−7) + 0·1·(−1) + 2·0·5 − 2·(−4)·(−1) − 1·5·0 − (−7)·0·0 = −8.

14. det(A) = |1 2 3 4; 0 5 6 7; 0 0 8 9; 0 0 0 10| = 400, since of the 24 terms in the expression (3.1.3) for the determinant, only the term σ(1,2,3,4)a11a22a33a44 = 400 contains all nonzero entries.

15. det(A) = |0 0 2 0; 5 0 0 0; 0 0 0 3; 0 2 0 0| = −60, since of the 24 terms in the expression (3.1.3) for the determinant, only the term σ(3,1,4,2)a13a21a34a42 contains all nonzero entries, and since σ(3,1,4,2) = −1, we obtain σ(3,1,4,2)a13a21a34a42 = (−1)·2·5·3·2 = −60.

16. |π π^2; √2 2π| = 2π^2 − √2·π^2 = (2 − √2)π^2.

17. |2 3 −1; 1 4 1; 3 1 6| = (2)(4)(6) + (3)(1)(3) + (−1)(1)(1) − (3)(4)(−1) − (1)(1)(2) − (6)(1)(3) = 48.

18. |3 2 6; 2 1 −1; −1 1 4| = (3)(1)(4) + (2)(−1)(−1) + (6)(2)(1) − (−1)(1)(6) − (1)(−1)(3) − (4)(2)(2) = 19.

19. |2 3 6; 0 1 2; 1 5 0| = (2)(1)(0) + (3)(2)(1) + (6)(0)(5) − (1)(1)(6) − (5)(2)(2) − (0)(0)(3) = −20.

20. |√π e^2 e^(−1); √67 1/30 2001; π π^2 π^3| = (√π)(1/30)(π^3) + (e^2)(2001)(π) + (e^(−1))(√67)(π^2) − (π)(1/30)(e^(−1)) − (π^2)(2001)(√π) − (π^3)(√67)(e^2) ≈ 9601.882.
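Several of these determinants can be cross-checked directly against the permutation-sum definition (3.1.3). A brute-force Python sketch (fine for the small matrices here; the 3 × 3 matrix is the one the arithmetic in Problem 12 suggests):

```python
from itertools import permutations

# det(A) = sum over permutations p of sigma(p) * a[0][p0] * ... * a[n-1][p_{n-1}].
def det(A):
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        # sign via inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

print(det([[1, -1], [2, 3]]))                                          # Problem 8: 5
print(det([[2, 1, 5], [4, 2, 3], [9, 5, 1]]))                          # Problem 12: 7
print(det([[1, 2, 3, 4], [0, 5, 6, 7], [0, 0, 8, 9], [0, 0, 0, 10]]))  # Problem 14: 400
```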
e2t e3t e−4t 2t 3t 3e −4e−4t = (e2t )(3e3t )(16e−4t )+(e3t )(−4e−4t )(4e2t )+(e−4t )(2e2t )(9e3t )−(4e2t )(3e3t )(e−4t )− 21. 2e 2t 3t 4e 9e 16e−4t 3t −4t 2t (9e )(−4e )(e ) − (16e−4t )(2e2t )(e3t ) = 42et . 22. y1 − y1 + 4y1 − 4y1 = 8 sin 2x + 4 cos 2x − 8 sin 2x − 4 cos 2x = 0, y2 − y2 + 4y2 − 4y2 = −8 cos 2x + 4 sin 2x + 8 cos 2x − 4 sin 2x = 0, y3 − y3 + 4y3 − 4y3 = ex − ex + 4ex − 4ex = 0. y 1 y2 y3 cos 2x sin 2x ex y1 y2 y3 = −2 sin 2x 2 cos 2x ex y 1 y2 y 3 −4 cos 2x −4 sin 2x ex = 2ex cos2 2x − 4ex sin 2x cos 2x + 8ex sin2 2x + 8ex cos2 2x + 4ex sin 2x cos 2x + 2ex sin2 2x = 10ex . 23. (a): y1 − y1 − y1 + y1 = ex − ex − ex + ex = 0, y2 − y2 − y2 + y2 = sinh x − cosh x − sinh x + cosh x = 0, y3 − y3 − y3 + y3 = cosh x − sinh x − cosh x + sinh x = 0. y1 y1 y1 y2 y2 y2 y3 y3 y3 = ex ex ex cosh x sinh x sinh x cosh x cosh x sinh x 213 = ex sinh2 x + ex cosh2 x + ex sinh x cosh x − ex sinh2 x − ex cosh2 x − ex sinh x cosh x = 0. (b): The formulas we need are cosh x = ex + e−x 2 and sinh x = ex − e−x . 2 Adding the two equations, we ﬁnd that cosh x + sinh x = ex , so that −ex + cosh x + sinh x = 0. Therefore, we may take d1 = −1, d2 = 1, and d3 = 1. 24. 
(a): S4 = {1, 2, 3, 4} (1, 2, 3, 4) (1, 2, 4, 3) (1, 3, 2, 4) (1, 3, 4, 2) (1, 4, 2, 3) (1, 4, 3, 2) (2, 1, 3, 4) (2, 1, 4, 3) (2, 3, 1, 4) (2, 3, 4, 1) (2, 4, 1, 3) (2, 4, 3, 1) (3, 1, 2, 4) (3, 1, 4, 2) (3, 2, 1, 4) (3, 2, 4, 1) (3, 4, 1, 2) (3, 4, 2, 1) (4, 1, 2, 3) (4, 1, 3, 2) (4, 2, 1, 3) (4, 2, 3, 1) (4, 3, 1, 2) (4, 3, 2, 1) (b): N (1, 2, 3, 4) = 0, σ (1, 2, 3, 4) = 1, even; N (1, 2, 4, 3) = 1, σ (1, 2, 4, 3) = −1, odd; N (1, 3, 2, 4) = 1, σ (1, 3, 2, 4) = −1, odd; N (1, 3, 4, 2) = 2, σ (1, 3, 4, 2) = 1, even; N (1, 4, 2, 3) = 2, σ (1, 4, 2, 3) = 1, even; N (1, 4, 3, 2) = 3, σ (1, 4, 3, 2) = −1, odd; N (2, 1, 3, 4) = 1, σ (2, 1, 3, 4) = −1, odd; N (2, 1, 4, 3) = 2, σ (2, 1, 4, 3) = 1, even; N (2, 3, 1, 4) = 2, σ (2, 3, 1, 4) = 1, even; N (2, 3, 4, 1) = 3, σ (2, 3, 4, 1) = −1, odd; N (2, 4, 1, 3) = 3, σ (2, 4, 1, 3) = −1, odd; N (2, 4, 3, 1) = 4, σ (2, 4, 3, 1) = 1, even; N (3, 1, 2, 4) = 2, σ (3, 1, 2, 4) = 1, even; N (3, 1, 4, 2) = 3, σ (3, 1, 4, 2) = −1, odd; N (3, 2, 1, 4) = 3, σ (3, 2, 1, 4) = −1, odd; N (3, 2, 4, 1) = 4, σ (3, 2, 4, 1) = 1, even; N (3, 4, 1, 2) = 4, σ (3, 4, 1, 2) = 1, even; N (3, 4, 2, 1) = 5, σ (3, 4, 2, 1) = −1, odd; N (4, 1, 2, 3) = 3, σ (4, 1, 2, 3) = −1, odd; N (4, 1, 3, 2) = 4, σ (4, 1, 3, 2) = 1, even; N (4, 2, 1, 3) = 4, σ (4, 2, 1, 3) = 1, even; N (4, 2, 3, 1) = 5, σ (4, 2, 3, 1) = −1, odd; N (4, 3, 1, 2) = 5, σ (4, 3, 1, 2) = −1, odd; N (4, 3, 2, 1) = 6, σ (4, 3, 2, 1) = 1, even. (c): det(A) = a11 a22 a33 a44 − a11 a22 a34 a43 − a11 a23 a32 a44 + a11 a23 a34 a42 + a11 a24 a32 a43 − a11 a24 a33 a42 − a12 a21 a33 a44 + a12 a21 a34 a43 + a12 a23 a31 a44 − a12 a23 a34 a41 − a12 a24 a31 a43 + a12 a24 a33 a41 + a13 a21 a32 a44 − a13 a21 a34 a42 − a13 a22 a31 a44 + a13 a22 a34 a41 + a13 a24 a31 a42 − a13 a24 a32 a41 − a14 a21 a32 a43 + a14 a21 a33 a42 + a14 a22 a31 a43 − a14 a22 a33 a41 − a14 a23 a31 a42 + a14 a23 a32 a41 214 25. 
det(A) = 1 −1 0 1 3 025 2 103 9 −1 2 1 1 · 0 · 0 · 1 − 1 · 0 · 2 · 3 − 1 · 1 · 2 · 1 + 1 · 1 · 2 · 5 + 1(−1)2 · 3 − 1(−1)0 · 5 −3(−1)0 · 1 + 3(−1)2 · 3 + 3 · 1 · 0 · 1 − 3 · 1 · 2 · 1 − 3(−1)0 · 3 + 1(−1)0 · 1 +2(−1)2 · 1 − 2(−1)2 · 5 − 2 · 0 · 0 · 1 + 2 · 0 · 2 · 1 + 2(−1)0 · 5 − 2(−1)2 · 1 −9(−1)2 · 3 + 9(−1)0 · 5 + 9 · 0 · 0 · 3 − 9 · 0 · 0 · 1 − 9 · 1 · 0 · 5 + 9 · 1 · 2 · 1 = 70. = 26. det(A) = 1 3 2 −2 1 0 1 1 −2 3 3 1 2 3 5 −2 1 · 1 · 1(−2) − 1 · 1 · 5 · 2 − 1 · 3(−2)(−2) + 1 · 3 · 5 · 3 + 1 · 3(−2)2 − 1 · 3 · 1 · 3 −3 · 1 · 1(−2) + 3 · 1 · 5 · 2 + 3 · 3 · 0(−2) − 3 · 3 · 5 · 1 − 3 · 3 · 0 · 2 + 3 · 3 · 1 · 1 +2 · 1(−2)(−2) − 2 · 1 · 5 · 3 − 2 · 1 · 0(−2) + 2 · 1 · 5 · 1 + 2 · 3 · 0 · 3 − 2 · 3(−2)1 −(−2)1(−2)2 + (−2)1 · 1 · 3 + (−2)1 · 0 · 2 − (−2) · 1 · 1 · 1 − (−2)3 · 0 · 3 + (−2)3(−2)1 = 0. = 27. 0123 2034 det(A) = 3405 4560 = 0·0·0·0−0·0·6·5−0·4·3·0+0·4·6·4+0·5·3·5−0·5·0·4 −2 · 1 · 0 · 0 + 2 · 1 · 6 · 5 + 2 · 4 · 2 · 0 − 2 · 4 · 6 · 3 − 2 · 5 · 2 · 5 + 2 · 5 · 0 · 3 +3 · 1 · 3 · 0 − 3 · 1 · 6 · 4 − 3 · 0 · 2 · 0 + 3 · 0 · 6 · 3 + 3 · 5 · 2 · 4 − 3 · 5 · 3 · 3 −4 · 1 · 3 · 5 + 4 · 1 · 0 · 4 + 4 · 0 · 2 · 5 − 4 · 0 · 0 · 3 − 4 · 4 · 2 · 4 + 4 · 4 · 3 · 3 = −315. 28. In evaluating det(A) with the expression (3.1.3), observe that the only nonzero terms in the summation occur when p5 = 5. Such terms include the factor a55 = 7, which is multiplied by each corresponding term from the 4 × 4 determinant calculated in Problem 27. Therefore, by factoring a55 out of the expression for the determinant, we are left with the determinant of the corresponding 4 × 4 matrix appearing in Problem 27. Therefore, the answer here is 7 · (−315) = −2205. 29. (a): det(cA) = ca11 ca21 ca12 ca22 = (ca11 )(ca22 ) − (ca12 )(ca21 ) = c2 a11 a22 − c2 a12 a21 = c2 (a11 a22 − a12 a21 ) = c2 det(A). 215 (b): det(cA) σ (p1 , p2 , p3 , . . . , pn )ca1p1 ca2p2 ca3p3 · · · canpn = = cn σ (p1 , p2 , p3 , . . . 
, pn )a1p1 a2p2 a3p3 · · · anpn n = c det(A), where each summation above runs over all permutations σ of {1, 2, 3, . . . , n}. 30. a11 a25 a33 a42 a54 . All row and column indices are distinct, so this is a term of an order 5 determinant. Further, N (1, 5, 3, 2, 4) = 4, so that σ (1, 5, 3, 2, 4) = (−1)4 = +1. 31. a11 a23 a34 a43 a52 . This is not a possible term of an order 5 determinant, since the column indices are not distinct. 32. a13 a25 a31 a44 a42 . This is not a possible term of an order 5 determinant, since the row indices are not distinct. 33. a11 a32 a24 a43 a55 . This is a possible term of an order 5 determinant. N (1, 4, 2, 3, 5) = 2 =⇒ σ (1, 4, 2, 3, 5) = (−1)2 = +1. 34. a13 ap4 a32 a2q = a13 a2q a32 ap4 . We must choose p = 4 and q = 1 in order for the row and column indices to be distinct. N (3, 1, 2, 4) = 2 so that σ (3, 1, 2, 4) = (−1)2 = +1. 35. a21 a3q ap2 a43 = ap2 a21 a3q a43 . We must choose p = 1 and q = 4. N (2, 1, 4, 3) = 2 and σ (2, 1, 4, 3) = (−1)2 = +1. 36. a3q ap4 a13 a42 . We must choose p = 2 and q = 1. N (3, 4, 1, 2) = 4 and σ (3, 4, 1, 2) = (−1)4 = +1. 37. apq a34 a13 a42 . We must choose p = 2 and q = 1. N (3, 1, 4, 2) = 3 and σ (3, 1, 4, 2) = (−1)3 = −1. 38. (a): 123 = 1, 132 = 3 3 −1, 213 = −1, 231 = 1, 312 = 1, 321 = −1. 3 (b): Consider ijk a1i a2j a3k . The only nonzero terms arise when i, j, and k are distinct. Conse- i=1 j =1 k=1 quently, 3 3 3 ijk a1i a2j a3k = 123 a11 a22 a33 + 132 a11 a23 a32 + 213 a12 a21 a33 i=1 j =1 k=1 + 231 a12 a23 a31 + 312 a13 a21 a32 + 321 a13 a22 a31 = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a11 a23 a32 − a12 a21 a33 − a13 a22 a31 = det(A). 39. From the given term, we have N (n, n − 1, n − 2, . . . , 1) = 1 + 2 + 3 + · · · + (n − 1) = n(n − 1) , 2 because the series of (n − 1) terms is just an arithmetic series which has a ﬁrst term of one, common diﬀerence of one, and last term (n − 1). Thus, σ (n, n − 1, n − 2, . . . , 1) = (−1)n(n−1)/2 . 216 Solutions to Section 3.2 True-False Review: 1. 
FALSE. The determinant of the matrix will increase by a factor of 2n . For instance, if A = I2 , then 20 det(A) = 1. However, det(2A) = det = 4, so the determinant in this case increases by a factor of 02 four. 2. TRUE. In both cases, the determinant of the matrix is multiplied by a factor of c. 3. TRUE. This follows by repeated application of Property (P8): det(A5 ) = det(AAAAA) = (detA)(detA)(detA)(detA)(detA) = (detA)5 . 4. TRUE. Since det(A2 ) = (detA)2 and since det(A) is a real number, det(A2 ) must be nonnegative. 5. FALSE. The matrix is not invertible if and only if its determinant, x2 y − xy 2 = xy (x − y ), is zero. For example, if x = y = 1, the matrix is not invertible since x = y ; however, neither x nor y is zero in this case. We conclude that the statement is false. 6. TRUE. We have det(AB ) = (detA)(detB ) = (detB )(detA) = det(BA). Problems: From this point on, P2 will denote an application of the property of determinants that states that if every element in any row (column) of a matrix A is multiplied by a number c, then the determinant of the resulting matrix is c det(A). 1. 1 23 2 64 3 −5 2 1 = 1 2 3 0 2 −2 0 −11 −7 2 =2 1. A12 (−2), A13 (−3) 2. 2 −1 4 3 21 −2 14 1 = 2 −1 4 3 21 0 08 2 −1 4 1 2 1 3 6 12 1 =− 4 −1 2 4 = −45 2. P2 1 −1/2 2 2 21 =2 3 0 08 1. A13 (1) 3. 2 1 1 −1 0 0 1 3 =2 0 0 1 2 3 0 1 −1 0 −11 −7 6 3 12 2. P2 2 =− 26 13 01 2 3 1 −1 0 −18 = 2(−18) = −36. 3. A23 (11) 1 −1/2 2 3 7/2 −5 =2 0 0 0 8 = 2 · 28 = 56. 3. A12 (−3) −1 0 0 26 5 15 9 36 3 = −5 · 9 = (−45)(−1) = 45. −1 2 01 01 6 3 4 217 1. P12 4. 0 1 −2 −1 0 3 2 −3 0 2. A12 (2), A13 (4) 1 0 −3 0 1 −2 2 −3 0 1 = 5. 3 5 2 7 1 9 −6 1 3 1 = 1 6 −2 5 9 −6 21 3 1 0 0 3 = 0 −3 1 −2 0 0 = 0. 3. A23 (3) 1 6 −2 0 −21 4 0 −11 7 2 = 16 −2 0 1 −10 0 0 −103 4 = 1. A31 (−1) 1 0 −3 0 1 −2 0 −3 6 2 = 2. A13 (−2) 1. P12 , P2 4. A23 (−1) 3. P2 3 = 1 6 −2 0 1 −10 0 −11 7 = −103. 2. A12 (−5), A13 (−2) 3. A32 (−2) 4. A24 (11) 6. 
1 −1 2 4 3 124 −1 132 2 142 1 = 1 −1 2 4 0 1 −1 −2 =3·4 0 0 5 6 0 1 0 −2 1 −1 2 4 0 4 −4 −8 0 0 5 6 0 3 0 −6 1 −1 2 4 0 1 −1 −2 = −12 0 0 1 0 0 0 5 6 4 2 1 −1 2 4 0 1 −1 −2 = −12 0 0 1 0 0 0 0 6 1. A12 (−3), A13 (1), A14 (−2) 5 2. P2 3. P34 1 −1 2 4 0 1 −1 −2 = −12 0 1 0 −2 0 0 5 6 3 = −12 · 6 = −72. 4. A23 (−1) 5. A34 (−5) 7. Note that in the ﬁrst step below, we extract a factor of 13 from the second row of the matrix, and we also extract a factor of 8 from the second column of the matrix. 2 26 2 1 32 104 56 40 1 4 26 −13 2 7 1 5 2 2 = 13 · 8 2 1 1 1 5 1 5 0 −9 0 −11 = −104 0 −3 0 −3 0 −6 −1 −6 3 5 = 312 15 1 5 01 0 1 00 0 −2 0 0 −1 0 4 1 4 1 2 −1 7 2 7 5 −1 5 1 5 −1 5 21 2 −1 = −104 27 2 7 24 1 4 2 1 5 1 5 0 1 0 1 = −104(−3) 0 −9 0 −11 0 −6 −1 −6 4 6 = 312 15 1 5 01 0 1 0 0 −1 0 00 0 −2 = 312(−1)(−2) = 624. 218 1. P2 2. P14 3. A12 (−2), A13 (−2), A14 (−2) 4. P2, P23 5. A23 (9), A24 (6) 6. P34 8. 0 1 −1 1 −1 0 11 1 −1 01 −1 −1 −1 0 2 1 −1 0 1 0 −1 1 2 =− 0 0 −3 −3 0 0 0 3 1 −1 0 1 0 −1 1 2 =− 0 0 0 3 0 0 −3 −3 4 3 1. P13 1 −1 01 0 −1 12 =− 0 1 −1 1 0 −2 −1 1 1 −1 01 −1 0 11 =− 0 1 −1 1 −1 −1 −1 0 1 3. A23 (1), A24 (−2) 2. A12 (1), A14 (1) = 9. 4. P34 9. 2 3 4 5 1 0 1 2 3 1 4 5 5 2 3 3 1 0 =− 1 2 1 1 0 =− 0 0 2 3 4 5 2 3 4 1 −1 −7 2 1 −2 3 1 2 3 12 3 4 0 1 −1 −7 =3 00 1 4 00 4 23 5 3 1 4 5 5 2 3 3 4 = 6 =3 1 0 =− 0 0 2 2 3 5 3 1 2 2 1 −2 1 −1 −7 12 3 4 0 1 −1 −7 00 3 12 00 4 23 1 0 0 0 2 3 4 1 −1 −7 0 1 4 0 0 7 = 3 · 1 · 1 · 1 · 7 = 21. 1. CP12 2. A13 (−1), A14 (−2) 3. P24 4. A23 (−2), A24 (−3) 5. P2 6. A34 (−4) 10. 2 −1 3 4 7 12 3 −2 48 6 6 −6 18 −24 −1 23 4 1 72 3 = 6 4 −2 8 6 −1 1 3 −4 =6 1 −2 −3 −4 0 9 5 7 =6 0 6 20 22 0 −1 0 −8 1 −2 −3 −4 0 −1 0 −8 = −6 0 6 20 22 0 9 5 7 1 −2 −3 −4 0 −1 0 −8 = −6 0 0 20 −26 0 0 5 −65 3 1 −2 −3 −4 0 −1 0 −8 =6 0 0 5 −65 0 0 20 −26 6 1 4 7 =6 1 −2 −3 −4 0 −1 0 −8 0 0 5 −65 0 0 0 234 2 1 −2 −3 −4 1 7 2 3 4 −2 8 6 −1 1 3 −4 5 = 6 · 1(−1) · 5 · 234 = −7020. 219 1. CP12 , P2 2. M1 (−1) 5. A23 (6), A24 (9) 6. P34 3. 
A12 (−1), A13 (−4), A14 (1) 4. P24 7. A34 (−4) 11. 1 −1 3 2 2 243 =7·2 3 132 −1 454 7 −1 3 4 14 246 21 134 −7 458 1 1 −1 3 2 0 4 −2 −1 = 14 0 0 −4 −3 0 3 8 6 3 1 −1 3 2 0 1 −10 −7 = 14 0 0 −4 −3 0 3 8 6 4 1 −1 3 2 0 1 −10 7 = −56 0 0 1 3/4 0 0 38 27 6 5 = 14 1 −1 3 2 0 1 −10 −7 = −56 0 0 1 3/4 0 0 0 −3/2 7 2. A12 (−2), A13 (−3), A14 (1) 1. P2 1 −1 3 2 0 4 −2 −1 = 14 0 4 −6 −4 0 3 8 6 2 5. A24 (−3) 6. P2 1 −1 3 2 0 1 −10 −7 0 0 −4 −3 0 0 38 27 = −56 · (−3/2) = 84. 3. A23 (−1) 4. A42 (−1) 7. A34 (−38) 12. 3 1 4 3 8 1 3 1 =− 4 3 8 1 −1 0 1 7 123 8 −1 6 6 7 094 16 −1 8 12 1 0 2 =− 0 0 0 −1 4 −1 −1 −1 1 0 4 =− 0 0 0 1 −1 0 1 4 42 0 0 −1 4 2 0 0 3 −1 0 00 2 = −(1)(4)(−1)(3)(2) = 24. 1 0 3 =− 0 0 0 1. P12 1 2 1 4 0 0 0 0 2 4 7 4 1 0 2 1 4 2. A12 (−3), A13 (−4), A14 (−3), A15 (−8) 13. 2 3 14. −1 1 1 −1 15. 2 3 2 16. −1 23 5 −2 1 8 −2 5 = 1; invertible. = 0; not invertible. 6 −1 5 1 0 1 1 −1 0 4 42 4 36 4 39 8 78 7 123 1 −1 0 1 8 −1 6 6 7 094 16 −1 8 12 = 14; invertible. = −8; invertible. 3. A23 (−1), A24 (−1), A25 (−2) 1 0 2 1 4 4. A34 (−1), A35 (−1) 220 17. 1 0 2 −1 3 −2 1 4 2 16 2 1 −3 4 0 1 0 2 −1 0 −2 −5 7 0 1 2 4 0 −3 2 1 = = −2 −5 7 1 24 −3 21 = 133; invertible. 11 1 1 02 0 2 0 0 −2 −2 02 2 0 = 2 0 2 0 −2 −2 2 2 0 = 16; invertible. 18. 11 1 1 −1 1 −1 1 1 1 −1 −1 −1 1 1 −1 19. 1 2 −3 5 −1 2 −3 6 2 3 −1 4 1 −2 3 −6 = 1 2 −3 5 1 −2 3 −6 =− 2 3 −1 4 1 −2 3 −6 1 2 = 0; not invertible. 1. M2 (−1) 2. P7 1k x1 b1 ,x= , and b = . According to Corollary k4 x2 b2 3.2.5, the system has unique solution if and only if det(A) = 0. But, det(A) = 4 − k 2 , so that the system has a unique solution if and only if k = ±2. 20. The system is Ax = b, where A = 1 2 k 1 2k 1 2 −k 1 = 2 −k = (3k − 1)(k +4). Consequently, the system has an inﬁnite 0 0 1 − 3k 3 61 number of solutions if and only if k = −4 or k = 1/3 (Corollary 3.2.5). 21. det(A) = 22. The given system is (1 − k )x1 + 2x2 + 2x1 + (1 − k )x2 + x1 + x2 + x3 x3 (2 − k )x3 = 0, = 0, = 0. 
The determinant of the coeﬃcients as a function of k is given by det(A) = 1−k 2 1 2 1−k 1 1 1 2−k = −(1 + k )(k − 1)(k − 4). Consequently, the system has an inﬁnite number of solutions if and only if k = −1, k = 1, or ,k = 4. 1k k1 11 and only if k = 0, 1. 23. det(A) = 0 1 1 = 1 + k − 1 − k 2 = k (1 − k ). Consequently, the system has a unique solution if 1 −1 2 3 1 4 = 1 · 1 · 3 + (−1)4 · 0 + 2 · 3 · 1 − 2 · 1 · 0 − 1 · 4 · 1 − (−1)3 · 3 = 14. 0 13 det(AT ) = det(A) = 14. det(−2A) = (−2)3 det(A) = −8 · 14 = −112. 24. det(A) = 221 1 −1 12 and B = . det(A) det(B ) = [3 · 1 − (−1)2][1 · 4 − 2(−2)] = 5 · 8 = 40. 2 3 −2 4 3 −2 = 3 · 16 − (−2)(−4) = 40. Hence, det(AB ) = det(A) det(B ). det(AB ) = −4 16 25. A = cosh x sinh x cosh y sinh y and B = . sinh x cosh x sinh y cosh y det(AB ) = det(A) det(B ) = (cosh2 x − sinh2 x)(cosh2 y − sinh2 y ) = 1 · 1 = 1. 26. A = 2 1 4 −1 6 2 1 1 =3 2 3 27. 3 6 9 28. 1 −3 2 −1 3 1 29. 1 + 3a 1 3 1 + 2a 1 2 2 20 1 7 13 2 1 4 −1 6 2 1 2 =3·2 2 3 1 −3 + 4 · 1 2 −1 + 4 · 2 3 1+4·3 1 = 1 = 1 1 2 1 1 2 1 7 13 = 1 1 2 −1 3 2 1 2 3 3 3a 1 3 2 + 2a 1 2 0 020 1 7 13 3 1. P2 = 0. 1 7 13 2. P2 3. P7 2 = 0. 3 2 = 0+a 2 0 1 1 2 3 2 0 3 = a·0 = 0. 1. P5 2. P2, P7 30. B is obtained from A by the following elementary row operations: (1) M1 (4), (2) M2 (3), (3) P12 . Thus, det(B ) = det(A) · 4 · 3 · (−1) = −12. 31. B is obtained from AT by the following elementary row operations: (1) A13 (3), (2) M1 (−2). Since det(AT ) = det(A) = 1, and (1) leaves the determinant unchanged, we have det(B ) = −2. 32. B is obtained from A by the following operations: (1) Interchange the two columns, (2) M1 (−1), (3) A12 (−4). Now (3) leaves the determinant unchanged, and (1) and (2) each change the sign of the determinant. Therefore, det(B ) = det(A) = 1. 33. B is obtained from A by the following row operations: (1) A13 (5), (2) M2 (−4), (3) P12 , (4) P23 . Thus, det(B ) = det(A) · (−4) · (−1) · (−1) = (−6) · (−4) = 24. 34. 
B is obtained from A by the following operations: (1) M1 (−3), (2) A23 (−4), (3) P12 . Thus, det(B) = det(A) · (−3) · (−1) = (−6) · (3) = −18.

35. B is obtained from Aᵀ by the following row operations: (1) M1 (2), (2) A32 (−1), (3) A13 (−1). Thus, det(B) = det(Aᵀ) · 2 = (−6) · (2) = −12.

36. We have det(ABᵀ) = det(A) det(Bᵀ) = 5 · 3 = 15.

37. We have det(A²B⁵) = (det A)²(det B)⁵ = 5² · 3⁵ = 6075.

38. We have det((A⁻¹B²)³) = (det(A⁻¹B²))³ = ((det A⁻¹)(det B)²)³ = (3²/5)³ = (9/5)³ = 729/125 = 5.832.

39. We have det((2B)⁻¹(AB)ᵀ) = det((2B)⁻¹) det((AB)ᵀ) = det(A) det(B)/det(2B) = (5 · 3)/(3 · 2⁴) = 5/16.

40. We have det((5A)(2B)) = (5⁴ · 5)(2⁴ · 3) = 150,000.

41.
(a): The volume of the parallelepiped is given by |det(A)|. In this case, we have |det(A)| = |2 + 12k + 36 − 4k − 18 − 12| = |8 + 8k|.
(b): NO. The answer does not change because the determinants of A and Aᵀ are the same, so the volume determined by the columns of A, which is the same as the volume determined by the rows of Aᵀ, is |det(Aᵀ)| = |det(A)|.
(c): The matrix A is invertible if and only if det(A) ≠ 0. That is, A is invertible if and only if 8 + 8k ≠ 0, if and only if k ≠ −1.

42. det [1 −1 x; 2 1 x²; 4 −1 x³] = det [1 −1 x; 0 3 x² − 2x; 0 3 x³ − 4x] = det [1 −1 x; 0 3 x² − 2x; 0 0 x³ − x² − 2x] = det [1 −1 x; 0 3 x(x − 2); 0 0 x(x − 2)(x + 1)].
We see directly that the determinant will be zero if and only if x ∈ {0, −1, 2}.

43. Write the ﬁrst row as x(α, β) + y(−β, α) and the second row as x(β, α) + y(−α, β). Then
det [αx − βy  βx + αy; βx − αy  αx + βy]
= x² det [α β; β α] + xy det [α β; −α β] + xy det [−β α; β α] + y² det [−β α; −α β]
= x²(α² − β²) + 2αβ xy − 2αβ xy + y²(α² − β²)
= (x² + y²) det [α β; β α].

44.
a1 + βb1 a2 + βb2 a3 + βb3 b1 + γc1 b2 + γc2 b3 + γc3 c1 + αa1 c2 + αa2 c3 + αa3 = a1 a2 a3 b1 + γc1 b2 + γc2 b3 + γc3 = a1 a2 a3 b1 b2 b3 c1 c2 c3 b1 = (1 + αβγ ) b2 b3 c1 + αa1 c2 + αa2 c3 + αa3 + β b1 βb2 βb3 b1 b2 b3 c1 c2 c3 a1 a2 a3 + αβγ c1 c2 c3 a1 a2 a3 b1 + γc1 b2 + γc2 b3 + γc3 c1 + αa1 c2 + αa2 c3 + αa3 . Now if the last expression is to be zero for all ai , bi , and ci , then it must be the case that 1 + αβγ = 0; hence, αβγ = −1. 223 45. Suppose A is a matrix with a row of zeros. We will use (P3) and (P7) to justify the fact that det(A) = 0. If A has more than one row of zeros, then by (P7), since two rows of A are the same, det(A) = 0. Assume instead that only one row of A consists entirely of zeros. Adding a nonzero row of A to the row of zeros yields a new matrix B with two equal rows. Thus, by (P7), det(B ) = 0. However, B was obtained from A by adding a multiple of one row to another row, and by (P3), det(B ) = det(A). Hence, det(A) = 0, as required. 46. A is orthogonal, so AT = A−1 . Using the properties of determinants, it follows that 1 = det(In ) = det(AA−1 ) = det(A) det(A−1 ) = det(A) det(AT ) = det(A) det(A). Therefore, det(A) = ±1. 47. (a): From the deﬁnition of determinant we have: σ (p1 , p2 , p3 . . . , pn )a1p1 a2p2 a3p3 · · · anpn . det(A) = (47.1) n! If A is lower triangular, then aij = 0 whenever i < j , and therefore the only nonzero terms in (47.1) are those with pi ≤ i for all i. Since all the pi must be distinct, the only possibility is pi = i for all i with 1 ≤ i ≤ n, and so (47.1) reduces to the single term: det(A) = σ (1, 2, 3, . . . , n)a11 a22 a33 · · · ann . (b): det(A) = 2 −1 3 5 1 221 3 014 1 201 1 = 2 16 0 0 0 13 0 0 =− −1 −8 1 0 1 201 3 1. A41 (−5), A42 (−1), A43 (−4) −3 −11 3 0 0 020 −1 −8 1 0 1 201 0 13 0 0 2 16 0 0 −1 −8 1 0 1 201 2 = 2 000 0 13 0 0 =− −1 −8 1 0 1 201 4 2. A31 (−3), A32 (−2) = −26. 3. P12 4. A21 (−16/13) 48. The problem stated in the text is wrong. 
It should state: “Use determinants to prove that if A is invertible and B and C are matrices with AB = AC, then det(B) = det(C).” To prove this, take the determinant of both sides of AB = AC to get det(AB) = det(AC). Thus, by Property P8, we have det(A) det(B) = det(A) det(C). Since A is invertible, det(A) ≠ 0. Thus, we can cancel det(A) from the last equality to obtain det(B) = det(C).

49. det(S⁻¹AS) = det(S⁻¹) det(A) det(S) = det(S⁻¹) det(S) det(A) = det(S⁻¹S) det(A) = det(In) det(A) = det(A).

50. No. If A were invertible, then det(A) ≠ 0, so that det(A³) = det(A) det(A) det(A) ≠ 0.

51. Let E be an elementary matrix. There are three diﬀerent possibilities for E.
(a): E permutes two rows: Then E is obtained from In by interchanging two rows of In. Since det(In) = 1 and using Property P1, we obtain det(E) = −1.
(b): E adds a multiple of one row to another: Then E is obtained from In by adding a multiple of one row of In to another. Since det(In) = 1 and using Property P3, we obtain det(E) = +1.
(c): E scales a row by k: Then E is obtained from In by multiplying a row of In by k. Since det(In) = 1 and using Property P2, we obtain det(E) = k.

52. We have
0 = det [x y 1; x1 y1 1; x2 y2 1] = xy1 + yx2 + x1 y2 − x2 y1 − xy2 − yx1 ,
which can be rewritten as x(y1 − y2) + y(x2 − x1) = x2 y1 − x1 y2 . Setting a = y1 − y2 , b = x2 − x1 , and c = x2 y1 − x1 y2 , we can express this equation as ax + by = c, the equation of a line. Moreover, if we substitute x1 for x and y1 for y, we obtain a valid identity. Likewise, if we substitute x2 for x and y2 for y, we obtain a valid identity. Therefore, we have a straight line that includes the points (x1 , y1) and (x2 , y2).

53. det [1 x x²; 1 y y²; 1 z z²] = det [1 x x²; 0 y−x y²−x²; 0 z−x z²−x²] = (y − x)(z − x) det [1 x x²; 0 1 y+x; 0 1 z+x] = (y − x)(z − x) det [1 x x²; 0 1 y+x; 0 0 z−y] = (y − x)(z − x)(z − y) = (y − z)(z − x)(x − y).
1. A12 (−1), A13 (−1)  2. P2  3. A23 (−1)

54.
Since A is an n × n skew-symmetric matrix, Aᵀ = −A; thus, det(Aᵀ) = det(−A) = (−1)ⁿ det(A) = − det(A), since n is given as odd. But by P4, det(Aᵀ) = det(A), so det(A) = − det(A), and hence det(A) = 0.

55. Solving b = c1 a1 + c2 a2 + · · · + cn an for c1 a1 yields c1 a1 = b − c2 a2 − · · · − cn an . Consequently, det(Bk) can be written as
det(Bk) = det([a1 , a2 , . . . , ak−1 , b, ak+1 , . . . , an ])
= det([a1 , . . . , ak−1 , (c1 a1 + c2 a2 + · · · + cn an ), ak+1 , . . . , an ])
= c1 det([a1 , . . . , ak−1 , a1 , ak+1 , . . . , an ]) + c2 det([a1 , . . . , ak−1 , a2 , ak+1 , . . . , an ]) + · · · + ck det([a1 , . . . , ak−1 , ak , ak+1 , . . . , an ]) + · · · + cn det([a1 , . . . , ak−1 , an , ak+1 , . . . , an ]).
Now by P7, all except the k-th determinant are zero since they have two equal columns, so that we are left with det(Bk) = ck det(A).

58. Using technology we ﬁnd that
det [1 2 3 4 a; 2 1 2 3 4; 3 2 1 2 3; 4 3 2 1 2; a 4 3 2 1] = −192 + 88a − 8a² = −8(a − 3)(a − 8).
Consequently, the matrix is invertible provided a ≠ 3, 8.

59. Using technology we ﬁnd that
det [1−k 4 1; 3 2−k 1; 3 4 −1−k] = −(k − 6)(k + 2)².
Consequently, the system has an inﬁnite number of solutions if and only if k = 6 or k = −2.
k = 6: In this case, the system is Bx = 0, where B = [−5 4 1; 3 −4 1; 3 4 −7]. This system has solution set {t(1, 1, 1) : t ∈ R}.
k = −2: In this case, the system is Cx = 0, where C = [3 4 1; 3 4 1; 3 4 1]. This system has solution set {(r, s, −3r − 4s) : r, s ∈ R}.

60. Using technology we ﬁnd that:
det(A) = −20; A⁻¹ = 19/10 −2/5 1/2 0 1/10 1/2 −1 1/2 −5 0 ; x = A⁻¹b = = 1/2 . 0 1/2 −1 1/2 12/5 1/10 0 1/2 −2/5

Solutions to Section 3.3

True-False Review:

1. FALSE. Because 2 + 3 = 5 is odd, the (2, 3)-cofactor is the negative of the (2, 3)-minor of the matrix.

2. TRUE. This just requires a slight modiﬁcation of the proof of Theorem 3.3.16. We compute
(A · adj(A))ij = Σ (k = 1 to n) aik adj(A)kj = Σ (k = 1 to n) aik Cjk = δij · det(A).
Therefore, A · adj(A) = det(A) · In .

3. TRUE.
The Cofactor Expansion Theorem allows for expansion along any row or any column of the matrix, and in all cases, the result is the determinant of the matrix.

4. FALSE. For example, let A = [1 2 3; 4 5 6; 7 8 9], and let c = 2. It is easy to see that the (1, 1)-entry of adj(A) is −3. But the (1, 1)-entry of adj(2A) is −12, not −6. Therefore, the equality posed in this review item does not generally hold. Many other counterexamples could be given as well.

5. FALSE. For example, let A = [1 2 3; 4 5 6; 7 8 9] and B = [9 8 7; 6 5 4; 3 2 1]. Then A + B is the 3 × 3 matrix each of whose entries is 10. The (1, 1)-entry of adj(A + B) is therefore 0. However, the (1, 1)-entry of adj(A) is −3, and of adj(B) is also −3. But (−3) + (−3) ≠ 0. Many other examples abound, of course.

6. FALSE. Let A = [a b; c d] and let B = [e f; g h]. Then AB = [ae+bg af+bh; ce+dg cf+dh]. We compute
adj(AB) = [cf+dh −(af+bh); −(ce+dg) ae+bg] = [h −f; −g e][d −b; −c a] = adj(B) adj(A),
which in general is not equal to adj(A) adj(B).

7. TRUE. This can be immediately deduced by substituting In for A in Theorem 3.3.16.

Problems:

From this point on, CET(col#n) will mean that the Cofactor Expansion Theorem has been applied to column n of the determinant, and CET(row#n) will mean that the Cofactor Expansion Theorem has been applied to row n of the determinant.

1. Minors: M11 = 4, M21 = −3, M12 = 2, M22 = 1; Cofactors: C11 = 4, C21 = 3, C12 = −2, C22 = 1.

2. Minors: M11 = −9, M21 = −7, M31 = −2, M12 = 7, M22 = 1, M32 = −2, M13 = 5, M23 = 3, M33 = 2; Cofactors: C11 = −9, C21 = 7, C31 = −2, C12 = −7, C22 = 1, C32 = 2, C13 = 5, C23 = −3, C33 = 2.

3. Minors: M11 = −5, M21 = 47, M31 = 3, M12 = 0, M22 = −2, M32 = 0, M13 = 4, M23 = −38, M33 = −2; Cofactors: C11 = −5, C21 = −47, C31 = 3, C12 = 0, C22 = −2, C32 = 0, C13 = 4, C23 = 38, C33 = −2.

4. M12 = det [3 7 5; 1 4 1; 2 6 2] = −4, M31 = det [3 −1 2; 4 1 2; 0 1 2] = 16, M23 = det [1 7 5; 3 1 0; 2 6 2] = 40, M42 = det [1 −1 2; 3 1 2; 7 4 6] = 12;
C12 = 4, C31 = 16, C23 = −40, C42 = 12.

5. 1 −2 1 3

6. −1 1 3

7. 2 7 1 1 −4 1 3 5 −2 = −7 ·

8.
3 7 2 1 4 1 2 3 −5 =3· = 1 · |3| + 2 · |1| = 5. 2 3 4 −2 1 4 =3· 1 3 4 1 1 −4 5 −2 1 2 3 −5 +2· + − −1 3 2 1 2 −4 1 −2 1 4 3 −5 +4· 2 1 −3· +2· −1 1 1 1 1 5 4 2 2 4 = 3(−11) + 2(−7) + 4(−6) = −71. = −7 · 18 + 0 − 3 · 9 = −153. = 3(−11) − 7(−17) + 2(−2) = 82. 227 9. 10. 0 2 −3 −2 0 5 3 −5 0 = 3 · 10 + 5(−6) + 0 · 4 = 0. 1 −2 3 0 4 0 7 −2 0 1 3 4 1 5 −2 0 1 = −2 · 1 −2 3 0 7 = −2 · (−26) − 4 · (−5) = 72. −4· 4 1 5 −2 1 −2 3 0 1 3 1 5 −2 1. CET(col#4) 0 −2 1 −1 2 5 11. 1 3 7 12. −1 23 0 14 2 −1 3 13. 2 −1 3 5 21 3 −3 7 1 −1 2 5 1 = 1 = −1 · 1 =2· 2 −3 −2· 1 −1 1 7 4 3 3 7 1 2 +2· −1 −3 −5· = 7 − 2(−1) = 9. 2 1 3 4 3 7 1. CET(row#1) = −7 + 10 = 3. +3· −1 2 3 1 1. CET(col#1) = 2 · 17 − 5 · 2 + 3(−7) = 3. 1. CET(col#1) 14. 0 −2 1 2 0 −3 −1 3 0 1 =0· 0 −3 3 0 −2· −2 3 1 0 −1· −2 1 0 −3 = 0 + 6 − 6 = 0. 1. CET(col#1) 15. 1 0 −1 0 01 0 −1 −1 0 −1 0 01 0 1 1 =1· 1 0 −1 0 −1 0 1 0 1 0 −1 0 1 0 −1 1 0 1 −1· = −2 − 2 = −4. 1. CET(col#1) 16. 2 −1 31 1 4 −2 3 0 2 −1 0 1 3 −2 4 = 2 5 31 1 0 −2 3 0 0 −1 0 1 −1 −2 4 = 5 −1 1 1. CA32 (2) 1 4 +3· 2 51 2 03 =− 1 1 −1 4 2 5 1 −1 2. CET(row#3) = 21 − 21 = 0. 228 17. 3 5 2 6 2 3 5 −5 7 5 −3 −16 9 −6 27 −12 3 = 1 = −1 11 −27 −9 18 −93 −24 54 −111 1. A21 (−1) 1 2 −3 11 2 3 5 −5 7 5 −3 −16 9 −6 27 −12 −1 11 −27 0 −81 150 0 −210 537 4 = 1 2 −3 11 0 −1 11 −27 0 −9 18 −93 0 −24 54 −111 2 = −81 150 −210 537 5 =− 2. A12 (−2), A13 (−7), A14 (−9) 4. A12 (−9), A13 (−24) = 11997. 3. CET(col#1) 5. CET(col#1) 18. 2 −7 43 5 5 −3 7 6 2 63 4 2 −4 5 2 −7 43 1 3 12 6 2 63 4 2 −4 5 = 4 = 1. A42 (−1) = 13 −2 1 −16 0 −9 −10 −8 −3 1 = 2 5 0 −13 2 −1 1 3 1 2 0 −16 0 −9 0 −10 −8 −3 13 −2 1 101 −18 0 29 −14 0 2. A21 (−2), A23 (−6), A24 (−4) 5. A12 (9), A13 (3) 6 = −13 2 −1 3 0 −9 = − −16 −10 −8 −3 101 −18 29 14 3. CET(col#1) = −892. 4. P2 6. CET(col#3) 19. 2 0 −1 3 03 0 1 01 3 0 10 1 −1 30 2 0 0 2 4 0 5 1 = 0 −3 5 0 0 −9 1 −10 =− 1 30 4 0 −1 3 5 3 6 = 0 −3 5 3 0 1 1 3 0 0 1 −1 0 −1 3 −3 4 = − −9 −1 42 0 50 −9 1 −10 −26 0 −35 1. A41 (−2),A45 (−3) 5. 
P2 0 0 0 1 0 7 = 5 0 1 −10 3 5 42 50 −26 −35 2. CET(col#1) 6. A21 (−5), A23 (3) 0 2 4 0 5 0 −3 5 0 3 012 =− 1 304 0 −1 3 5 2 5 = −3 5 0 −9 1 −10 1 −3 −5 = −170. 3. A12 (−3) 4. CET(col#1) 7. CET(col#2) 20. 0 x y z −x 0 1 −1 −y −1 0 1 −z 1 −1 0 1 = xz −x −y − z −z 0 x+y z 0 1 −1 0 −1 1 1 −1 0 2 = xz x + y z −x 1 −1 −y − z −1 1 229 xz − yz + z 2 −x + y − z −y − z 3 = x+y+z 0 00 −1 1 1. A41 (−x), A43 (1) xz − yz + z 2 −x − y − z 4 = 2. CET(col#2) x+y+z 0 = (x + y + z )2 . 3. A32 (1), A31 (−z ) 4. CET(col#3) 21. (a): V (r1 , r2 , r3 ) = 2 = 1 r1 2 r1 1 r2 2 r2 r 2 − r1 2 2 r 2 − r1 1 r3 2 r3 1 = 1 r1 2 r1 r3 − r1 2 2 r3 − r1 0 r 2 − r1 2 2 r 2 − r1 0 r 3 − r1 2 2 r3 − r1 2 2 2 2 = (r2 − r1 )(r3 − r1 ) − (r3 − r1 )(r2 − r1 ) = (r3 − r1 )(r2 − r1 )[(r3 + r1 ) − (r2 + r1 )] = (r2 − r1 )(r3 − r1 )(r3 − r2 ). 1. CA12 (−1), CA13 (−1) 2. CET(row#1) (b): We use mathematical induction. The result is vacuously true when n = 1 and quickly veriﬁed when n = 2. Suppose that the result is true when n = k − 1, for k ≥ 3, and consider 1 r2 2 r2 . . . 1 r3 2 r3 . . . ··· ··· ··· 1 rk 2 rk . . . k r1 −1 V (r1 , r2 , . . . , rk ) = 1 r1 2 r1 . . . k r2 −1 k r3 −1 ··· k rk −1 . The determinant vanishes when rk = r1 , r2 , . . . , rk−1 , so we can write n−1 (rk − ri ), V (r1 , r2 , . . . , rk ) = a(r1 , r2 , . . . , rk ) i=1 k where a(r1 , r2 , . . . , rk ) is the coeﬃcient of rk −1 in the expansion of V (r1 , r2 , . . . , rk ). However, using the Cofactor Expansion Theorem along column k , we see that this coeﬃcient is just V (r1 , r2 , . . . , rk−1 ), so by hypothesis, (rm − ri ). a(r1 , r2 , . . . , rk ) = V (r1 , r2 , . . . , rk−1 ) = 1≤i<m≤n−1 Thus, n−1 (rm − ri ) V (r1 , r2 , . . . , rk ) = 1≤i<m≤n−1 (rk − ri ) = i=1 (rm − ri ). 1≤i<m≤n Hence the result is true for n = k , so, by induction, is true for all non-negative integers n. 22. (a): det(A) = 11; 230 5 −4 −1 3 (b): MC = 5 −1 −4 3 (c): adj(A) = (d): A−1 = ; 1 11 ; 5 −1 −4 3 . 23. 
(a): det(A) = 7; 1 −4 2 −1 (b): MC = ; (c): adj(A) = 1 2 −4 −1 ; 1 7 1 2 −4 −1 . (d): A−1 = 24. (a): det(A) = 0; −6 −2 (b): MC = (c): adj(A) = 15 5 ; −6 −2 15 5 ; (d): A−1 does not exist because det(A) = 0. 25. (a): det(A) = 2 −3 0 2 15 0 −1 2 = 2 −3 0 0 45 0 −1 2 = 2 · 13 = 26; 7 −4 −2 6 4 2 ; (b): MC = −15 −10 8 7 6 −15 (c): adj(A) = −4 4 −10 ; −2 2 8 7 1 1 −4 (d): A−1 = adj(A) = det(A) 26 −2 26. (a): det(A) = −2 2 0 3 −1 1 5 2 3 = −2 0 0 6 −15 4 −10 . 2 8 3 −1 4 4 2 3 = −2 · 4 = −8; 231 −7 −6 4 4 ; (b): MC = −11 −6 16 8 −8 −7 −11 16 8 ; (c): adj(A) = −6 −6 4 4 −8 −7 −11 16 1 1 8 . adj(A) = − −6 −6 (d): A−1 = det(A) 8 4 4 −8 27. (a): det(A) = 1 −1 2 3 −1 4 5 17 = 6 8 5 0 0 1 9 11 7 = 6; −11 −1 8 9 −3 −6 ; (b): MC = −2 2 2 −11 9 −2 2 ; (c): adj(A) = −1 −3 8 −6 2 −11/6 3/2 −1/3 −11 9 −2 1 1 1/3 . −1 −3 2 = −1/6 −1/2 (d): A−1 = adj(A) = det(A) 6 4/3 −1 1/3 8 −6 2 28. 0 12 −1 −1 3 = 1 −2 1 5 43 (b): MC = −5 −2 1 ; 5 −2 1 5 −5 5 (c): adj(A) = 4 −2 −2 ; 3 1 1 1 1 (d): A−1 = adj(A) = det(A) 10 (a): det(A) = 29. (a): det(A) = 2 −3 5 1 2 1 0 7 −1 = −9 1 7 (b): MC = 32 −2 −14 ; −13 3 7 0 12 0 −3 4 1 −2 1 = 10; 5 −5 5 4 −2 −2 . 3 1 1 0 −7 3 1 2 1 0 7 −1 = 14; 232 −9 32 −13 3 ; (c): adj(A) = 1 −2 7 −14 7 (d): A−1 −9 32 −13 −9/14 16/7 −13/14 1 1 1 −2 3 = 1/14 −1/7 3/14 . adj(A) = = det(A) 14 7 −14 7 1/2 −1 1/2 30. 11 1 1 −1 1 −1 1 1 1 −1 −1 −1 1 1 −1 (a): det(A) = = 11 1 1 02 0 2 0 2 −2 0 02 0 −2 = 2 0 2 2 −2 0 2 0 −2 = 16; 44 4 4 −4 4 −4 4 (b): MC = 4 4 −4 −4 ; −4 4 4 −4 4 −4 4 −4 4 4 4 4 ; (c): adj(A) = 4 −4 −4 4 4 4 −4 −4 4 −4 4 −4 1 14 4 4 4 . = adj(A) = 4 −4 −4 4 det(A) 16 4 4 −4 −4 (d): A−1 31. 
1 −2 3 2 (a): det(A) = 0 1 9 0 3 5 1 3 0 2 3 −1 = 11 0 18 5 4 1 10 3 796 2 0 0 0 −1 84 −46 −29 81 −162 60 99 −27 ; (b): MC = 18 38 −11 3 −30 26 130 −72 84 −162 18 −30 −46 60 38 26 ; (c): adj(A) = −29 99 −11 130 81 −27 3 −72 =− 11 4 7 0 18 1 10 96 =− 11 0 18 41 10 −29 0 −84 = 402; 233 (d): A−1 84 −162 18 −30 1 1 −46 60 38 26 = adj(A) = −29 99 −11 130 det(A) 402 81 −27 3 −72 14/67 −27/67 3/67 −5/67 −23/201 10/67 19/201 13/201 = −29/402 33/134 −11/402 65/201 27/134 −9/134 1/134 −12/67 32. (a): Using the Cofactor Expansion Theorem along row 1 yields det(A) = (1 + 2x2 ) + 2x(2x + 4x3 ) + 2x2 (2x2 + 4x4 ) = 8x6 + 12x4 + 6x2 + 1 = (1 + 2x2 )3 . (b): 1 + 2x2 −(2x + 4x3 ) 2x2 + 4x4 1 − 4x4 −(2x + 4x3 ) MC = 2x + 4x3 2 4 3 2x + 4x 2x + 4x 1 + 2x2 1 + 2x2 −2x(1 + 2x2 ) 2x2 (1 + 2x2 ) = 2x(1 + 2x2 ) (1 + 2x2 )(1 − 2x2 ) −2x(1 + 2x2 ) , 2x2 (1 + 2x2 ) 2x(1 + 2x2 ) 1 + 2x2 so that A−1 33. det(A) = 1 1 1 1 2 2 1 2 3 1 2x 1 1 −2x 1 − 2x2 = adj(A) = det(A) (1 + 2x2 )2 2x2 −2x = 1 0 0 1 1 1 1 1 2 = 1 1 1 2 2x2 2x . 1 = 2 − 1 = 1, and C23 = − 1 1 1 2 = −(2 − 1) = −1. Thus, (A−1 )32 = 34. det(A) = 2 0 −1 2 1 1 3 −1 0 = 4 10 2 11 3 −1 0 (adj(A))32 C23 = = −1. det(A) det(A) =− 4 1 3 −1 = −(−4 − 3) = 7, and C13 = 2 1 3 −1 = −2 − 3 = −5. . 234 Thus, (A−1 )31 = (adj(A))31 C13 5 = =− . det(A) det(A) 7 35. = det(A) = 1 0 00 2 −1 −1 3 0 1 −1 2 −1 1 30 = 1 0 10 2 −1 13 0 1 −1 2 −1 1 20 −1 23 1 −4 2 1 00 = 2 −4 = −1 −1 3 1 −1 2 1 30 3 2 = 4 + 12 = 16, and 1 2 −1 C32 = − 1 1 2 0 3 0 = −(−3) 1 −1 1 2 = 3(2 + 1) = 9. Thus, (A−1 )23 = 2e2t −e2t 36. MC = −2et 3et 2e2t −e2t , and det(A) = 4e3t , so −2et 3et 1 1 2e2t −e2t adj(A) = 3t . = −2et 3et det(A) 4e , adj(A) = A−1 e−t sin(2t) −et cos(2t) −et cos(2t) et sin(2t) 37. MC = (adj(A))23 C32 9 = = . det(A) det(A) 16 e−t sin(2t) et cos(2t) −et cos(2t) et sin(2t) , adj(A) = , and det(A) = sin2 (2t) + cos2 (2t) = 1, so 1 e−t sin(2t) et cos(2t) adj(A) = . 
−et cos(2t) et sin(2t) det(A) −e−t −te2t 3te−t −te−t −te−t e−t 0 , adj(A) = −e−t e−t 0 , and 2t 2t 2t 0 te −te 0 te A−1 = 3te−t −te−t 38. MC = −te−t det(A) = et et et 2tet 2tet tet e−2t e−2t 2e−2t = et 0 0 tet tet 0 e−2t 0 e−2t = et (te−t ) = t, so A−1 3te−t 1 1 −e−t = adj(A) = det(A) t −te2t −te−t −te−t e−t 0 . 2t 0 te 235 −1 2 −1 −1 3 −2 3 , adj(A) = 2 −6 4 . Hence, 39. MC = 3 −6 −2 4 −2 −1 3 −2 1 A · adj(A) = 3 4 2 4 5 3 −1 3 −2 0 5 2 −6 4 = 0 6 −1 3 −2 0 0 0 0 0 0 = 03 . 0 From Equation (3.3.4) of the text we have that, in general, A · adj(A) = det(A) · In . Since, for the given matrix, A · adj(A) = 03 , we must have det(A) = 0. 40. det(A) = 2 −3 1 2 = 7, det(B1 ) = x1 = 2 −3 4 2 = 16, and det(B2 ) = 16 det(B1 ) = and det(A) 7 x2 = 2 1 2 4 = 6. Thus, det(B2 ) 6 =. det(A) 7 Solution: (16/7, 6/7). 41. det(A) = 3 −2 1 1 1 −1 1 0 1 = 6, det(B1 ) = 4 −2 1 2 1 −1 1 0 1 = 9, det(B2 ) = 34 1 1 2 −1 11 1 = 0, det(B3 ) = 3 −2 4 1 12 1 01 = −3. 9 det(B2 ) det(B3 ) 3 det(B1 ) = , x2 = = 0, and x3 = =− . det(A) 6 det(A) det(A) 6 Solution: (3/2, 0, −1/2). Thus, x1 = 1 −3 1 1 −3 1 1 4 −1 = 0 7 −2 = 2 1 −3 0 7 −5 Thus, the system has only the trivial solution. 42. det(A) = 7 −2 7 −5 = −35 + 14 = −21 = 0. 236 43. −5 −2 3 −1 0 0 1 0 1 1 0 −1 4 1 −2 1 1 −2 3 −1 2 0 1 0 1 1 0 −1 0 1 −2 1 det(A) = = −5 −2 −1 1 1 −1 4 1 1 =− =− det(B1 ) = 1 −2 3 −1 2 0 1 0 0 1 0 −1 3 1 −2 1 det(B2 ) = 11 3 −1 22 1 0 10 0 −1 0 3 −2 1 det(B3 ) = 1 −2 1 −1 2 02 0 1 1 0 −1 0 13 1 = −2 −1 −1 0 5 20 4 11 1 −3 3 −1 2 0 1 0 0 0 0 −1 3 2 −2 1 = = 1 2 1 0 1 30 2 12 0 00 3 −2 1 = = −3 −1 0 20 = −2 −2 −3 11 1 −3 3 2 0 1 3 2 −2 1 30 2 12 3 −2 1 = −5 −3 3 0 0 1 7 2 −2 = −3 3 −5 01 0 32 7 =− = 1 30 −4 50 3 −2 1 = 17, = −2(−8) = 16, 3 −2 31 2 0 12 0 1 00 −1 1 −2 3 1 −2 31 2 0 12 1 1 00 0 1 −2 3 =− = 0 −2 1 −1 0 02 0 1 1 0 −1 −3 13 1 0 −2 −1 1 1 −1 −3 1 1 det(B4 ) = = −3, −3 −5 3 7 =− 3 31 2 12 −1 −2 3 = 6. Therefore, x1 = 17 16 11 , x2 = − , x3 = − , and x4 = −2. 3 3 3 Solution: (11/3, −17/3, −16/3, −2). 44. 
det(A) = det [e^t  e^(−2t); e^t  −2e^(−2t)] = −3e^(−t),
det(B1) = det [3 sin t  e^(−2t); 4 cos t  −2e^(−2t)] = −2e^(−2t)(3 sin t + 2 cos t),
det(B2) = det [e^t  3 sin t; e^t  4 cos t] = e^t(4 cos t − 3 sin t).
Thus, x1 = det(B1)/det(A) = 2(3 sin t + 2 cos t)/(3e^t) and x2 = det(B2)/det(A) = e^(2t)(3 sin t − 4 cos t)/3.
Solution: ( (2/3)e^(−t)(3 sin t + 2 cos t), (1/3)e^(2t)(3 sin t − 4 cos t) ).

45. det(A) = det [1 4 −2 1; 2 9 −3 −2; 1 5 0 −1; 3 14 7 −2] = −19 and det(B2) = det [1 2 −2 1; 2 5 −3 −2; 1 3 0 −1; 3 6 7 −2] = −31.
Therefore x2 = det(B2)/det(A) = 31/19.

46. det(A) = det [b+c a a; b c+a b; c c a+b] = 4abc,
det(B1) = a(a − b + c)(a + b − c),
det(B2) = −b(a − b − c)(a + b − c),
det(B3) = −c(a − b − c)(a − b + c).

Case 1: If a ≠ 0, b ≠ 0, and c ≠ 0, then det(A) ≠ 0, so it follows by Theorem 3.2.4 that the given system of equations has a unique solution. The solution in this case is given by:
x1 = det(B1)/det(A) = (a − b + c)(a + b − c)/(4bc),
x2 = det(B2)/det(A) = −(a − b − c)(a + b − c)/(4ac),
x3 = det(B3)/det(A) = −(a − b − c)(a − b + c)/(4ab).

Case 2: If a = b = 0 and c ≠ 0, then it is easy to see from the system that the system is inconsistent. By the symmetry of the equations, the same will be true if a = c = 0 with b ≠ 0, or b = c = 0 with a ≠ 0.

Case 3: Suppose a = 0, where b ≠ 0 and c ≠ 0, and consider the reduced row echelon form of the augmented matrix, which works out to [1 0 0 0; 0 1 1 1; 0 0 0 0].
cc 0 c−b c−b c−b 0000 From the last matrix we see that the system has an inﬁnite number of solutions of the form {(0, 1 − r, r) : r ∈ R}. By the symmetry of the three equations, it follows that the other two cases: b = 0 with a = 0 and c = 0, and c = 0 with a = 0 and b = 0 have similar forms of solutions. 47. Let B be the matrix obtained from A by adding column i to column j (i = j ) in the matrix A. By the property for columns corresponding to Property P3, we have det(B ) = det(A). Cofactor expansion of B along column j gives n n k=1 n aki Ckj . akj Ckj + (akj + aki )Ckj = det(A) = det(B ) = k=1 k=1 That is, n det(A) = det(A) + aki Ckj , k=1 since by the Cofactor Expansion Theorem the ﬁrst summation on the right-hand side is simply det(A). It follows immediately that n aki Ckj = 0, i = j. k=1 51. 1.21 3.42 2.15 3.25 3.42 2.15 A = 5.41 2.32 7.15 , B1 = 4.61 2.32 7.15 , 21.63 3.51 9.22 9.93 3.51 9.22 1.21 3.25 2.15 1.21 3.42 3.25 B2 = 5.41 4.61 7.15 , B3 = 5.41 2.32 4.61 . 21.63 9.93 9.22 21.63 3.51 9.93 From Cramer’s Rule, x1 = det(B1 ) det(B2 ) det(B3 ) ≈ 0.25, x2 = ≈ 0.72, x3 = ≈ 0.22. det(A) det(A) det(A) 239 52. det(A) = 32, det(B1 ) = −3218, det(B2 ) = 3207, det(B3 ) = 2896, det(B4 ) = −9682, det(B5 ) = 2414. So, x1 = − 1609 3207 181 4841 1207 , x2 = , x3 = , x4 = − , x5 = . 16 32 2 32 16 53. We have n (BA)ji = n bjk aki = k=1 k=1 1 1 · adj(A)jk · aki = det(A) det(A) n Ckj aki = δij , k=1 where we have used Equation (3.3.4) in the last step. Solutions to Section 3.4 Problems: 1. 5 −1 3 7 2. 3 −1 6 5 7 2 4 3 −2 3. 5 6 14 14 13 27 = 5 · 7 − 3(−1) = 38. = −43. = −3. 4. 2.3 1.5 7.9 4.2 3.3 5.1 6.8 3.6 5.7 = 2.3 3.3 3.6 5.1 5.7 − 1.5 4.2 6.8 5.1 5.7 + 7.9 4.2 3.3 6.8 3.6 = 1.035 + 16.11 − 57.828 = −40.683. 5. abc bca cab =a ca ab −b ba cb +c b c c a = a(bc − a2 ) − b(b2 − ac) + c(ab − c2 ) = 3abc − a3 − b3 − c3 . 6. 3 5 −1 2 2 1 52 3 2 57 1 −1 21 = 0 8 −7 −1 0 3 1 0 0 5 −1 4 1 −1 2 1 = −1 29 0 8 −7 −1 1 0 −1 4 8 −7 −1 1 0 = −1 3 5 −1 4 = −1 29 −1 8 4 = −124. 240 7. 
7 12 2 −2 4 3 −1 5 18 9 27 3 6 4 54 =9 7 123 2 −2 4 6 3 −1 5 4 2 136 = −36 14 −3 45 −14 = 36 8. det(A) = 11. MC = 7 −2 −5 3 423 10 7 7 −5 1 3 = −36 3 12 7 3 = −9 14 0 −3 45 0 −14 −5 1 3 = −2196. =⇒ adj(A) = A−1 = 712 16 0 8 =9 10 0 7 −5 0 1 7 −5 −2 3 1 adj(A) = det(A) 7/11 −2/11 , so that −5/11 3/11 . 1 2 3 123 2 3 1 = 0 −1 −5 = −18. 0 −5 −7 312 5 −1 −7 5 −1 −7 5 , so that 5 =⇒ adj(A) = −1 −7 MC = −1 −7 −7 5 −1 −7 5 −1 −5/18 1/18 7/18 1 7/18 −5/18 . A−1 = adj(A) = 1/18 det(A) 7/18 −5/18 1/18 9. det(A) = −11 −38 7 4 7 −11 −38 0 0 1 =− 6 1= = 30. 5 20 5 20 −1 14 −1 −20 102 −38 −20 5 10 5 −24 11 , so that MC = 102 −24 −30 =⇒ adj(A) = 10 −30 10 −38 11 10 −2/3 17/5 −19/15 1 11/30 . A−1 = adj(A) = 1/6 −4/5 det(A) 1/3 −1 1/3 10. det(A) = 3 2 3 2 57 4 −3 2 = 116. 6 9 11 −51 −32 54 −51 8 31 8 −20 12 =⇒ adj(A) = −32 −20 24 , so that MC = 31 24 −26 54 12 −26 −51/116 2/29 31/116 1 6/29 . adj(A) = −8/29 −5/29 A−1 = det(A) 27/58 3/29 −13/58 11. det(A) = 16 8 12 10 7 7 −5 1 3 241 12. det(A) = −38 0 MC = 38 0 5 −1 2 3 −1 4 1 −1 2 5 9 −3 32 34 28 44 −124 −222 −24 −16 A−1 3 6 13. A = 5 2 5 5 1 2 = −152. −38 0 38 0 2 28 −124 −24 −60 , so that =⇒ adj(A) = 32 34 44 −222 −16 130 2 −60 130 8 8 1/4 0 −1/4 0 −4/19 −7/38 1 31/38 3/19 . = adj(A) = −17/76 −11/38 111/76 2/19 det(A) −1/76 15/38 −65/76 −1/19 , B1 = 4 9 5 2 x1 = cos t sin t 14. A = sin t − cos t x1 = , B1 = , so that 37 det(B2 ) 1 det(B1 ) = , x2 = =− . det(A) 24 det(A) 8 e−t 3e−t 5 1 4 13 15. A = 2 −1 5 , B1 = 7 −1 2 3 2 31 det(B1 ) 1 =, x1 = det(A) 4 5 16. A = 2 2 4 9 sin t − cos t , B2 = cos t sin t e−t 3e−t , so that det(B2 ) det(B1 ) = e−t [cos t + 3 sin t], x2 = = e−t [sin t − 3 cos t]. det(A) det(A) 3 6 , B2 = 45 3 5 , B2 = 2 7 22 1 det(B2 ) 1 x2 = = , det(A) 16 4 15 3 5 , B3 = 2 −1 7 , so that 2 32 1 det(B3 ) 21 x3 = = . det(A) 16 3 6 33 6 5 3 6 53 3 4 −7 , B1 = −1 4 −7 , B2 = 2 −1 −7 , B3 = 2 4 −1 , so that 5 9 45 9 2 4 9 25 4 det(B1 ) 30 det(B2 ) 59 det(B3 ) 81 x1 = = , x2 = = , x3 = = . 
det(A) 271 det(A) 271 det(A) 271 3.1 3.5 7.1 3.6 3.5 7.1 3.1 3.6 7.1 3.1 3.5 3.6 17. A = 2.2 5.2 6.3 , B1 = 2.5 5.2 6.3 , B2 = 2.2 2.5 6.3 , B3 = 2.2 5.2 2.5 , 1.4 8.1 0.9 9.3 8.1 0.9 1.4 9.3 0.9 1.4 8.1 9.3 so that x1 = det(B2 ) det(B3 ) det(B1 ) = 3.77, x2 = = 0.66, x3 = = −1.46. det(A) det(A) det(A) 18. Since A is invertible, A−1 exists. Then, AA−1 = In =⇒ det(AA−1 ) = det(In ) = 1 =⇒ det(A) det(A−1 ) = 1, so that det(A−1 ) = 1 . det(A) 242 19. det(2A) = 23 det(A) = 24. 1 1 =. det(A) 3 From the result of the preceding problem, det(A−1 ) = det(AT B ) = det(AT ) det(B ) = det(A) det(B ) = −12. det(B 5 ) = [det(B )]5 = −1024. 1 det(B −1 AB ) = det(B −1 ) det(A) det(B ) = det(A) det(B ) = det(A) = 3. det(B ) Solutions to Section 3.5 Additional Problems: 1. (a): We have det(A) = (−7)(−5) − (1)(−2) = 37. (b): We have 1 −5 −7 −2 1 A∼ 1. P12 1 −5 0 −37 2 ∼ = B. 2. A12 (7) Now, det(A) = −det(B ) = −(−37) = 37. (c): Let us use the Cofactor Expansion Theorem along the ﬁrst row: det(A) = a11 C11 + a12 C12 = (−7)(−5) + (−2)(−1) = 37. 2. (a): We have det(A) = (6)(1) − (−2)(6) = 18. (b): We have 1 A∼ 1 −2 1 1 2 ∼ 1 0 1 3 = B. 1. M1 (1/6) 2. A12 (2) Now, det(A) = 6 det(B ) = 6 · 3 = 18. (c): Let us use the Cofactor Expansion Theorem along the ﬁrst row: det(A) = a11 C11 + a12 C12 = (6)(1) + (6)(2) = 18. 3. (a): Using Equation (3.1.2), we have det(A) = (−1)(2)(−3)+(4)(2)(2)+(1)(0)(2)−(−1)(2)(2)−(4)(0)(−3)−(1)(2)(2) = 6+16+0−(−4)−0−4 = 22. (b): We have −1 A∼ 0 0 1 4 1 −1 2 2 2 ∼ 0 10 −1 0 4 1 2 2 = B. 0 −11 243 1. A13 (2) 2. A23 (−5) Now, det(A) = det(B ) = (−1)(2)(−11) = 22. (c): Let us use the Cofactor Expansion Theorem along the ﬁrst column: det(A) = a11 C11 + a21 C21 + a31 C31 = (−1)(−10) + (0)(14) + (2)(6) = 22. 4. (a): Using Equation (3.1.2), we have det(A) = (2)(0)(3)+(3)(2)(6)+(−5)(−4)(−3)−(−6)(0)(−5)−(−3)(2)(2)−(3)(−4)(3) = 0+36−60+−0+12+36 = 24. (b): We have 2 3 −5 2 1 2 6 −8 ∼ 0 A∼ 0 0 −12 18 0 1. A12 (2), A13 (−3) 3 −5 6 −8 = B. 0 2 2. 
A23 (2) Now, det(A) = det(B ) = (2)(6)(2) = 24. (c): Let us use the Cofactor Expansion Theorem along the second row: det(A) = (−4)(6) + (0)(36) + (2)(24) = −24 + 0 + 48 = 24. 5. (a): Of the 24 terms appearing in the determinant expression (3.1.3), only terms containing the factors a11 and a44 will be nonzero (all other entries in the ﬁrst column and fourth row of A are zero). Looking at entries in the second and third rows and columns of A, we see that only the product a23 a32 is nonzero. Therefore, the only nonzero term in the summation (3.1.3) is a11 a23 a32 a44 = (3)(1)(2)(−4) = −24. The permutation associated with this term is (1, 3, 2, 4) which contains one inversion. Therefore, σ (1, 3, 2, 4) = −1, and so the determinant is (−24)(−1) = 24. (b): We have 3 −1 −2 1 2 1 −1 10 = B. A∼ 0 0 1 4 0 0 0 −4 1. P23 Now, det(A) = −det(B ) = −(3)(2)(1)(−4) = 24. 01 4 (c): Cofactor expansion along the ﬁrst column yields: det(A) = 3 · 2 1 −1 . This latter determinant can 0 0 −4 be found by cofactor expansion along the last column: (−4)[(0)(1) − (2)(1)] = 8. Thus, det(A) = 3 · 8 = 24. 6. 244 (a): Of the 24 terms in the determinant expression (3.1.3), the only nonzero term is a41 a32 a23 a14 , which has a positive sign: σ (4, 3, 2, 1) = +1. Therefore, det(A) = (−3)(1)(−5)(−2) = −30. (b): By permuting the ﬁrst and last rows, and by permuting the middle two rows, we bring A to upper triangular form. The two permutations introduce two sign changes, so the resulting upper triangular matrix has the same determinant as A, and it is (−3)(1)(−5)(−2) = −30. 0 0 −2 0 −5 1 . This latter 1 −4 1 determinant can be found by cofactor expansion along the ﬁrst column: (1)[(0)(1) − (−5)(−2)] = −10. Hence, det(A) = 3 · (−10) = −30. (c): Cofactor expansion along the ﬁrst column yields: det(A) = −(−3) · 7. To obtain the given matrix from A, we perform two row permutations, multiply a row through by −4, and multiply a row through by 2. 
The combined eﬀect of these operations is to multiply the determinant of A by (−1)2 · (−4) · (2) = −8. Hence, the given matrix has determinant det(A) · (−8) = 4 · (−8) = −32. 8. To obtain the given matrix from A, we add 5 times the middle row to the top row, we multiply the last row by 3, we multiply the middle row by −1, we add a multiple of the last row to the middle row, and we perform a row permutation. The combined eﬀect of these operations is to multiply the determinant of A by (1) · (3) · (−1) · (−1) = 3. Hence, the given matrix has determinant det(A) · (3) = 4 · (3) = 12. 9. To obtain the given matrix from AT , we do two row permutations, multiply a row by −1, multiply a row by 3, and add 2 times one row to another. The combined eﬀect of these operations is to multiply the determinant of A by (−1)(−1)(−1)(3)(1) = −3. Hence, the given matrix has determinant det(A) · (−3) = (4) · (−3) = −12. 10. To obtain the given matrix from A, we permute the bottom two rows, multiply a row by 2, multiply a row by −1, add −1 times one row to another, and then multiply each row of the matrix by 3. The combined eﬀect of these operations is to multiply the determinant of A by (−1)(2)(−1)(1)(3)3 = 54. Hence, the given matrix has determinant det(A) · 54 = (4)(54) = 216. 11. We have det(AB ) = det(A)det(B ) = (−2) · 3 = −6. 12. We have det(B 2 A−1 ) = (det(B ))2 det(A−1 ) = 32 −2 = −4.5. 1 13. We have det((A−1 B )T (2B −1 )) = det(A−1 B ) · det(2B −1 ) = − 2 · 3 · 24 · 1 3 = −8. 14. We have det((−A)3 (2B 2 )) = det(−A)3 det(2B 2 ) = (−1)3 (detA)3 · 24 · (detB )2 = (−1)3 (−2)3 · 24 · 32 = 1152. 15. Since A and B are not square matrices, det(A) and det(B ) are not possible to compute. We have 8 −10 det(C ) = −18, and therefore, det(C T ) = −18. Now, AB = , and so det(AB ) = 474. Since 25 28 45 2 BA = 1 8 −13 , we have det(BA) = 0. Next, det(B T AT ) = det((AB )T ) = det(AB ) = 474. Next, 18 15 24 38 54 det(BAC ) = det(BA)det(C ) = 0 · (−18) = 0. 
Finally, we have det(ACB ) = det = 4104. 133 297 16. The matrix of cofactors of B is MC = 1 −1 −4 5 , and so adj(B ) = 1 −4 −1 5 . Since det(B ) = 1, 245 we have B −1 = 1 −4 −1 5 . Therefore, (A−1 B T )−1 = (B T )−1 A = (B −1 )T A = 17. 16 −1 −5 5 −3 ; MC = 4 −4 2 10 16 4 −4 5 2 adj(A) = −1 −5 −3 10 det(A) 28; = 4/7 1/7 A−1 = −1/28 5/28 −5/28 −3/28 ; −1/7 1/14 . 5/14 18. 5 12 −11 −1 65 −24 −23 −13 ; MC = −25 0 −5 5 −35 0 5 −5 5 65 −25 −35 12 −24 0 0 ; adj(A) = −11 −23 −5 5 −1 −13 5 −5 det(A) = −60; 5 65 −25 −35 1 12 −24 0 0 . A−1 = − −11 −23 −5 5 60 −1 −13 5 −5 19. 88 −24 −40 −48 32 12 −20 0 ; MC = 16 12 −4 0 −4 6 −2 0 88 32 16 −4 −24 12 12 6 adj(A) = −40 −20 −4 −2 ; −48 0 0 0 det(A) = −48; 88 32 16 −4 1 −24 12 12 6 . A−1 = − 48 −40 −20 −4 −2 −48 0 0 0 20. 1 −1 −4 5 1 3 2 4 = −2 −2 11 12 . 246 21 MC = 24 48 12 9 24 21 adj(A) = 12 −12 det(A) 9; = 7/3 A−1 = 4/3 −4/3 21. −12 −12 ; −27 24 48 9 24 ; −12 −27 8/3 1 −4/3 16/3 8/3 . −3 7 −2 0 2 −2 ; MC = 0 −6 0 2 7 0 −6 2 0 ; adj(A) = −2 0 −2 2 det(A) 2; = 3.5 0 −3 1 0 . A−1 = −1 0 −1 1 4 −1 0 1 4 . 22. There are many ways to do this. One choice is to let B = 5 0 0 10 9 123 23. FALSE. For instance, if one entry of the matrix A = 1 2 3 is changed, two of the rows of A 123 will still be identical, and therefore, the determinant of the resulting matrix must be zero. It is not possible to force the determinant to equal r. 24. Note that det(A) = 2 + 12k + 36 − 4k − 18 − 12 = 8k + 8 = 8(k + 1). (a): Based on the calculation above, we see that A fails to be invertible if and only if k = −1. (b): The volume of the parallelepiped determined by the row vectors of A is precisely |det(A)| = |8k + 8|. The volume of the parallelepiped determined by the column vectors of A is the same as the volume of the parallelepiped determined by the row vectors of AT , which is |det(AT )| = |det(A)| = |8k + 8|. Thus, the volume is the same. 25. Note that det(A) = 3(k + 1) + 2k + 0 − 3 − k (k + 1) − 0 = −k 2 + 4k = k (4 − k ). 
(a): Based on the calculation above, we see that A fails to be invertible if and only if k = 0 or k = 4.

(b): The volume of the parallelepiped determined by the row vectors of A is precisely |det(A)| = |−k^2 + 4k|. The volume of the parallelepiped determined by the column vectors of A is the same as the volume of the parallelepiped determined by the row vectors of A^T, which is |det(A^T)| = |det(A)| = |−k^2 + 4k|. Thus, the volume is the same.

26. Note that det(A) = 0 + 4(k − 3) + 2k^3 − k^2 − 8k − 0 = 2k^3 − k^2 − 4k − 12.

(a): Based on the calculation above, we see that A fails to be invertible if and only if 2k^3 − k^2 − 4k − 12 = 0, and the only real solution to this equation for k is k ≈ 2.39 (from calculator).

(b): The volume of the parallelepiped determined by the row vectors of A is precisely |det(A)| = |2k^3 − k^2 − 4k − 12|. The volume of the parallelepiped determined by the column vectors of A is the same as the volume of the parallelepiped determined by the row vectors of A^T, which is |det(A^T)| = |det(A)| = |2k^3 − k^2 − 4k − 12|. Thus, the volume is the same.

27. From the assumption that AB = −BA, we can take the determinant of each side: det(AB) = det(−BA). Hence, det(A)det(B) = (−1)^n det(B)det(A) = −det(A)det(B). From this, it follows that det(A)det(B) = 0, and therefore, either det(A) = 0 or det(B) = 0. Thus, either A or B (or both) fails to be invertible.

28. Since AA^T = In, we can take the determinant of both sides to get det(AA^T) = det(In) = 1. Hence, det(A)det(A^T) = 1. Therefore, we have (det(A))^2 = 1. We conclude that det(A) = ±1.

29. The coefficient matrix of this linear system is A = [−3 1; 1 2]. We have det(A) = −7, det(B1) = det [3 1; 1 2] = 5, and det(B2) = det [−3 3; 1 1] = −6. Thus, x1 = det(B1)/det(A) = 5/(−7) = −5/7 and x2 = det(B2)/det(A) = (−6)/(−7) = 6/7. Solution: (−5/7, 6/7).

30. The coefficient matrix of this linear system is A = [2 −1 1; 4 5 3; 4 −3 3].
We have det(A) = 16, det(B1) = det [2 −1 1; 0 5 3; 2 −3 3] = 32, det(B2) = det [2 2 1; 4 0 3; 4 2 3] = −4, and det(B3) = det [2 −1 2; 4 5 0; 4 −3 2] = −36. Thus, x1 = det(B1)/det(A) = 32/16 = 2, x2 = det(B2)/det(A) = (−4)/16 = −1/4, and x3 = det(B3)/det(A) = (−36)/16 = −9/4. Solution: (2, −1/4, −9/4).

31. The coefficient matrix of this linear system is A = [3 1 2; 2 −1 1; 0 5 5]. We have det(A) = −20, det(B1) = det [−1 1 2; −1 −1 1; −5 5 5] = −10, det(B2) = det [3 −1 2; 2 −1 1; 0 −5 5] = −10, and det(B3) = det [3 1 −1; 2 −1 −1; 0 5 −5] = 30. Thus, x1 = det(B1)/det(A) = (−10)/(−20) = 1/2, x2 = det(B2)/det(A) = (−10)/(−20) = 1/2, and x3 = det(B3)/det(A) = 30/(−20) = −3/2. Solution: (1/2, 1/2, −3/2).

Solutions to Section 4.1

True-False Review:

1. FALSE. The vectors (x, y) and (x, y, 0) do not belong to the same set, so they are not even comparable, let alone equal to one another.

2. TRUE. The unique additive inverse of (x, y, z) is (−x, −y, −z).

3. TRUE. The solution set refers to collections of the unknowns that solve the linear system. Since this system has 6 unknowns, the solution set will consist of vectors belonging to R^6.

4. TRUE. The vector (−1) · (x1, x2, . . . , xn) is precisely (−x1, −x2, . . . , −xn), and this is the additive inverse of (x1, x2, . . . , xn).

5. FALSE. There is no such name for a vector whose components are all positive.

6. FALSE. The correct result is (s + t)(x + y) = (s + t)x + (s + t)y = sx + tx + sy + ty, and in this item, only the first and last of these four terms appear.

7. TRUE. When the vector x is scalar multiplied by zero, each component becomes zero: 0x = 0. This is the zero vector in R^n.

8. TRUE. This is seen geometrically from addition and subtraction of geometric vectors.

9. FALSE. If k < 0 and x lies in the first quadrant, then kx is a vector in the third quadrant. For instance, (1, 1) lies in the first quadrant, but (−2)(1, 1) = (−2, −2) lies in the third quadrant.

10. TRUE.
Recalling that i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1), we have √ √ √ 5i − 6j + 2k = 5(1, 0, 0) − 6(0, 1, 0) + 2(0, 0, 1) = (5, −6, 2), as stated. 11. FALSE. If the three vectors lie on the same line or the same plane, the resulting object may determine a one-dimensional segment or two-dimensional area. For instance, if x = y = z = (1, 0, 0), then the vectors x,y, and z rest on the segment from (0, 0, 0) to (1, 0, 0), and do not determine a three-dimensional solid region. 12. FALSE. The components of k x only remain even integers if k is an integer. But, for example, if k = π , then the components of k x are not even integers at all, let alone even integers. 249 Problems: 1. v1 = (6, 2), v2 = (−3, 6), v3 = (6, 2) + (−3, 6) = (3, 8). y (3, 8) v3 (-3, 6) v2 (6, 2) v1 x Figure 61: Figure for Problem 1 2. v1 = (−3, −12), v2 = (20, −4), v3 = (−3, −12) + (20, −4) = (17, −16). y x v2 v1 (20, -4) v3 (-3, -12) (17, -16) Figure 62: Figure for Problem 2 3. v = 5(3, −1, 2, 5) − 7(−1, 2, 9, −2) = (15, −5, 10, 25) − (−7, 14, 63, −14) = (22, −19, −53, 39). Additive inverse: −v = (−1)v = (−22, 19, 53, −39). 4. We have y= 1 2 1 2 x + z = (1, 2, 3, 4, 5) + (−1, 0, −4, 1, 2) = 3 3 3 3 142 , , , 3, 4 . 333 5. Let x = (x1 , x2 , x3 , x4 ), y = (y1 , y2 , y3 , y4 ) be arbitrary vectors in R4 . Then x+y = (x1 , x2 , x3 , x4 ) + (y1 , y2 , y3 , y4 ) = (x1 + y1 , x2 + y2 , x3 + y3 , x4 + y4 ) = (y1 + x1 , y2 + x2 , y3 + x3 , y4 + x4 ) = (y1 , y2 , y3 , y4 ) + (x1 , x2 , x3 , x4 ) = y + x. 250 6. Let x = (x1 , x2 , x3 , x4 ), y = (y1 , y2 , y3 , y4 ), z = (z1 , z2 , z3 , z4 ) be arbitrary vectors in R4 . Then x + (y + z) = = = = = (x1 , x2 , x3 , x4 ) + (y1 + z1 , y2 + z2 , y3 + z3 , y4 + z4 ) (x1 + (y1 + z1 ), x2 + (y2 + z2 ), x3 + (y3 + z3 ), x4 + (y4 + z4 )) ((x1 + y1 ) + z1 , (x2 + y2 ) + z2 , (x3 + y3 ) + z3 , (x4 + y4 ) + z4 ) (x1 + y1 , x2 + y2 , x3 + y3 , x4 + y4 ) + (z1 , z2 , z3 , z4 ) (x + y) + z. 7. 
Let x = (x1 , x2 , x3 ) and y = (y1 , y2 , y3 ) be arbitrary vectors in R3 , and let r, s, t be arbitrary real numbers. Then: 1x = 1(x1 , x2 , x3 ) = (1x1 , 1x2 , 1x3 ) = (x1 , x2 , x3 ) = x. (st)x = (st)(x1 , x2 , x3 ) = ((st)x1 , (st)x2 , (st)x3 ) = (s(tx1 ), s(tx2 ), s(tx3 )) = s(tx1 , tx2 , tx3 ) = s(tx). r(x + y) = r(x1 + y1 , x2 + y2 , x3 + y3 ) = (r(x1 + y1 ), r(x2 + y2 ), r(x3 + y3 )) = (rx1 + ry1 , rx2 + ry2 , rx3 + ry3 ) = (rx1 , rx2 , rx3 ) + (ry1 , ry2 , ry3 ) = r x + r y. (s + t)x = (s + t)(x1 , x2 , x3 ) = ((s + t)x1 , (s + t)x2 , (s + t)x3 ) = (sx1 + tx1 , sx2 + tx2 , sx3 + tx3 ) = (sx1 , sx2 , sx3 ) + (tx1 , tx2 , tx3 ) = sx + tx. 8. For example, if x = (2, 2) and y = (−1, −1), then x + y = (1, 1) lies in the ﬁrst quadrant. If x = (1, 2) and y = (−5, 1), then x + y = (−4, 3) lies in the second quadrant. If x = (1, 1) and y = (−2, −2), then x + y = (−1, −1) lies in the third quadrant. If x = (2, 1) and y = (−1, −5), then x + y = (1, −4) lies in the fourth quadrant. Solutions to Section 4.2 True-False Review: 1. TRUE. This is part 1 of Theorem 4.2.6. 2. FALSE. The statement would be true if it was required that v be nonzero. However, if v = 0, then rv = sv = 0 for all values of r and s, and r and s need not be equal. We conclude that the statement is not true. 3. FALSE. This set is not closed under scalar multiplication. In particular, if k is an irrational number such as k = π and v is an integer, then k v is not an integer. 4. TRUE. We have (x + y) + ((−x) + (−y)) = (x + (−x)) + (y + (−y)) = 0 + 0 = 0, where we have used the vector space axioms in these steps. Therefore, the additive inverse of x + y is (−x) + (−y). 5. TRUE. This is part 1 of Theorem 4.2.6. 6. TRUE. This is called the trivial vector space. Since 0 + 0 = 0 and k · 0 = 0, it is closed under addition and scalar multiplication. Both sides of the remaining axioms yield 0, and 0 is the zero vector, and it is its own additive inverse. 251 7. FALSE. 
This set is not closed under addition, since 1 + 1 = 2 ∉ {0, 1}. Therefore, (A1) fails, and hence, this set does not form a vector space. (It is worth noting that the set is also not closed under scalar multiplication.)

8. FALSE. This set is not closed under scalar multiplication. If k < 0 and x is a positive real number, the result kx is a negative real number, and therefore no longer belongs to the set of positive real numbers.

Problems:

1. If x = p/q and y = r/s, where p, q, r, s are integers (q ≠ 0, s ≠ 0), then x + y = (ps + qr)/(qs), which is a rational number. Consequently, the set of all rational numbers is closed under addition. The set is not closed under scalar multiplication since, if we multiply a nonzero rational number by an irrational number, the result is an irrational number.

2. Let A = [aij] and B = [bij] be upper triangular matrices, and let k be an arbitrary real number. Then whenever i > j, it follows that aij = 0 and bij = 0. Consequently, aij + bij = 0 whenever i > j, and kaij = 0 whenever i > j. Therefore, A + B and kA are upper triangular matrices, so the set of all upper triangular matrices with real elements is closed under both addition and scalar multiplication.

3. V = {y : y'' + 9y = 4x^2} is not a vector space because it is not closed under vector addition. Let u, v ∈ V. Then u'' + 9u = 4x^2 and v'' + 9v = 4x^2. It follows that (u + v)'' + 9(u + v) = (u'' + 9u) + (v'' + 9v) = 4x^2 + 4x^2 = 8x^2 ≠ 4x^2. Thus, u + v ∉ V. Likewise, V is not closed under scalar multiplication.

4. V = {y : y'' + 9y = 0 for all x ∈ I} is closed under addition and scalar multiplication, as we now show:

A1: Addition: For u, v ∈ V, u'' + 9u = 0 and v'' + 9v = 0, so (u + v)'' + 9(u + v) = (u'' + 9u) + (v'' + 9v) = 0 + 0 = 0 =⇒ u + v ∈ V; therefore we have closure under addition.

A2: Scalar Multiplication: If α ∈ R and u ∈ V, then (αu)'' + 9(αu) = αu'' + 9αu = α(u'' + 9u) = α · 0 = 0, so we also have closure under scalar multiplication.

5.
V = {x ∈ Rn : Ax = 0, where A is a ﬁxed matrix} is closed under addition and scalar multiplication, as we now show: Let u, v ∈ V and k ∈ R. A1: Addition: A(u + v) = Au + Av = 0 + 0 = 0, so u + v ∈ V . A2: Scalar Multiplication: A(k u) = kAu = k 0 = 0, thus k u ∈ V . 0 0 6. (a): The zero vector in M2 (R) is 02 = 1 0 (b): Let A = A+B = 1 0 0 1 0 0 and B = 0 0 0 1 0 0 . Since det(02 ) = 0, 02 is an element of S . . Then det(A) = 0 = det(B ), so that A, B ∈ S . However, =⇒ det(A + B ) = 1, so that A + B ∈ S . Consequently, S is not closed under addition. / (c): YES. Note that det(cA) = c2 det(A), so if det(A) = 0, then det(cA) = 0. 7. (1) N is not closed under scalar multiplication, since multiplication of a positive integer by a real number does not, in general, result in a positive integer. 252 (2) There is no zero vector in N. (3) No element of N has an additive inverse in N. 8. Let V be R2 , i.e. {(x, y ) : x, y ∈ R}. With addition and scalar multiplication as deﬁned in the text, this set is clearly closed under both operations. A3: u + v = (u1 , u2 ) + (v1 , v2 ) = (u1 + v1 , u2 + v2 ) = (v1 + u1 , v2 + u2 ) = v + u. A4: [u + v]+ w = [(u1 + v1 , u2 + v2 )]+(w1 , w2 ) = ([u1 + v1 ]+ w1 , [u2 + v2 ]+ w2 ) = (u1 +[v1 + w1 ], u2 +[v2 + w2 ]) = (u1 , u2 ) + [(v1 + w1 , v2 + w2 )] = u + [v + w]. A5: 0 = (0, 0) since (x, y ) + (0, 0) = (x + 0, y + 0) = (x, y ). A6: If u = (a, b), then −u = (−a, −b), a, b ∈ R, since (a, b) + (−a, −b) = (a − a, b − b) = (0, 0). Now, let u = (u1 , u2 ), v = (v1 , v2 ), where u1 , u2 , v1 , v2 ∈ R, and let r, s, t ∈ R. A7: 1 · v = 1(v1 , v2 ) = (1 · v1 , 1 · v2 ) = (v1 , v2 ) = v. A8: (rs)v = (rs)(v1 , v2 ) = ((rs)v1 , (rs)v2 ) = (rsv1 , rsv2 ) = (r(sv1 ), r(sv2 )) = r(sv1 , sv2 ) = r(sv). A9: r(u + v) = r((u1 , u2 ) + (v1 , v2 )) = r(u1 + v1 , u2 + v2 ) = (r(u1 + v1 ), r(u2 + v2 )) = (ru1 + rv1 , ru2 + rv2 ) = (ru1 , ru2 ) + (rv1 , rv2 ) = r(u1 , u2 ) + r(v1 , v2 ) = ru + rv. 
A10: (r + s)u = (r + s)(u1 , u2 ) = ((r + s)u1 , (r + s)u2 ) = (ru1 + su1 , ru2 + su2 ) = (ru1 , ru2 ) + (su1 , su2 ) = r(u1 , u2 ) + s(u1 , u2 ) = ru + su. Thus, R2 is a vector space. 9. Let A = ab de c f ∈ M2×3 (R). Then we see that the zero vector is 02×3 = A + 02×3 = A. Further, A has additive inverse −A = −a −b −c −d −e −f 0 0 0 0 0 0 , since since A + (−A) = 02×3 . 10. The zero vector of Mm×n (R) is 0m×n , the m × n zero matrix. The additive inverse of the m × n matrix A with (i, j )-element aij is the m × n matrix −A with (i, j )-element −aij . 11. V = {p : p is a polynomial in x of degree 2}. V is not a vector space because it is not closed under addition. For example, x2 ∈ V and −x2 ∈ V , yet x2 + (−x2 ) = 0 ∈ V . / 12. YES. We verify the ten axioms of a vector space. A1: The product of two positive real numbers is a positive real number, so the set is closed under addition. A2: Any power of a positive real number is a positive real number, so the set is closed under scalar multiplication. A3: We have x + y = xy = yx = y + x for all x, y ∈ R+ , so commutativity under addition holds. A4: We have (x + y ) + z = (xy ) + z = (xy )z = x(yz ) = x(y + z ) = x + (y + z ) for all x, y, z ∈ R+ , so that associativity under addition holds. A5: We claim that the zero vector in this set is the real number 1. To see this, note that 1 + x = 1x = x = x1 = x + 1 for all x ∈ R+ . 1 1 1 A6: We claim that the additive inverse of the vector x ∈ R+ is x ∈ R+ . To see this, note that x + x = x x = 1 1 1 = x x = x + x. A7: Note that 1 · x = x1 = x for all x ∈ R+ , so that the unit property holds. A8: For all r, s ∈ R and x ∈ R+ , we have (rs) · x = xrs = xsr = (xs )r = r · xs = r · (s · x), as required for associativity of scalar multiplication. 253 A9: For all r ∈ R and x, y ∈ R+ , we have r · (x + y ) = r · (xy ) = (xy )r = xr y r = xr + y r = r · x + r · y, as required for distributivity of scalar multiplication over vector addition. 
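Several of the axiom checks for the exotic operations on R+ (vector "addition" x + y := xy, "scalar multiplication" r · x := x^r) can be spot-checked numerically. A minimal sketch; the sample values are arbitrary, and floating-point equality is tested with math.isclose:

```python
import math

def vadd(x, y):      # "addition" on R+: x + y := xy
    return x * y

def smul(r, x):      # "scalar multiplication" on R+: r . x := x**r
    return x ** r

close = math.isclose

x, y, z = 2.0, 5.0, 0.3
r, s = -1.5, 4.0

# A3/A4: commutativity and associativity of vector addition
assert close(vadd(x, y), vadd(y, x))
assert close(vadd(vadd(x, y), z), vadd(x, vadd(y, z)))

# A5/A6: the zero vector is 1, and the additive inverse of x is 1/x
assert close(vadd(x, 1.0), x)
assert close(vadd(x, 1.0 / x), 1.0)

# A8: (rs) . x = r . (s . x)
assert close(smul(r * s, x), smul(r, smul(s, x)))

# A9: r . (x + y) = r . x + r . y, i.e. (xy)**r = x**r * y**r
assert close(smul(r, vadd(x, y)), vadd(smul(r, x), smul(r, y)))

# A10: (r + s) . x = x**(r + s) = x**r * x**s = r . x + s . x
assert close(smul(r + s, x), vadd(smul(r, x), smul(s, x)))
```

A numerical check like this is not a proof, of course, but it is a quick way to see why each axiom reduces to a familiar law of exponents.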
A10: For all r, s ∈ R and x ∈ R+ , we have (r + s) · x = xr+s = xr xs = xr + xs = r · x + s · x, as required for distributivity of scalar multiplication over scalar addition. The above veriﬁcation of axioms A1-A10 shows that we have a vector space structure here. 13. Axioms A1 and A2 clearly hold under the given operations. A3: u + v = (u1 , u2 ) + (v1 , v2 ) = (u1 − v1 , u2 − v2 ) = (−(v1 − u1 ), −(v2 − u2 )) = (v1 − u1 , v2 − u2 ) = v + u. Consequently, A3 does not hold. A4: (u+v)+w = (u1 −v1 , u2 −v2 )+(w1 , w2 ) = ((u1 −v1 )−w1 , (u2 −v2 )−w2 ) = (u1 −(v1 +w1 ), u2 −(v2 +w2 )) = (u1 , u2 ) + (v1 + w1 , v2 + w2 ) = (u1 , u2 ) + (v1 − w1 , v2 − w2 ) = u + (v + w). Consequently, A4 does not hold. A5: 0 = (0, 0) since u + 0 = (u1 , u2 ) + (0, 0) = (u1 − 0, u2 − 0) = (u1 , u2 ) = u. A6: If u = (u1 , u2 ), then −u = (u1 , u2 ) since u + (−u) = (u1 , u2 ) + (u1 , u2 ) = (u1 − u1 , u2 − u2 ) = (0, 0) = 0. Each of the remaining axioms do not hold. 14. Axiom A5: The zero vector on R2 with the deﬁned operation of addition is given by (1,1), for if (u1 , u2 ) is any element in R2 , then (u1 , u2 ) + (1, 1) = (u1 · 1, u2 · 1) = (u1 , u2 ). Thus, Axiom A5 holds. Axiom A6: Suppose that (u1 , v1 ) is any element in R2 with additive inverse (a, b). From the ﬁrst part of the problem, we know that (1, 1) is the zero element, so it must be the case that (u1 , v1 ) + (a, b) = (1, 1) so that (u1 a, v1 b) = (1, 1); hence, it follows that u1 a = 1 and v1 b = 1, but this system is not satisﬁed for all (u1 , v1 ) ∈ R, namely, (0, 0). Thus, Axiom A6 is not satisﬁed. 15. Let A, B, C ∈ M2 (R) and r, s, t ∈ R. A3: The addition operation is not commutative since A + B = AB = BA = B + A. A4: Addition is associative since (A + B ) + C = AB + C = (AB )C = A(BC ) = A(B + C ) = A + (B + C ). 10 is the zero vector in M2 (R) because A + I2 = AI2 = A for all A ∈ M2 (R). 
01 A6: We wish to determine whether for each matrix A ∈ M2 (R) we can ﬁnd a matrix B ∈ M2 (R) such that A + B = I2 (remember that we have shown in A5 that the zero vector is I2 ), equivalently, such that AB = I2 . However, this equation can be satisﬁed only if A is nonsingular, therefore the axiom fails. A7: 1 · A = A is true for all A ∈ M2 (R). A8: (st)A = s(tA) is true for all A ∈ M2 (R) and s, t ∈ R. A9: sA + tA = (sA)(tA) = st(AA) = (s + t)A for all s, t ∈ R and A ∈ M2 (R). Consequently, the axiom fails. A10: rA + rB = (rA)(rB ) = r2 AB = r2 (A + B ). Thus, rA + rB = rA + rB for all r ∈ R, so the axiom fails. A5: I2 = 16. M2 (R) = {A : A is a 2 × 2 real matrix}. Let A, B ∈ M2 (R) and k ∈ R. A1: A ⊕ B = −(A + B ). A2: k · A = −kA. A3 and A4: A ⊕ B = −(A + B ) = −(B + A) = B ⊕ A. Hence, the operation ⊕ is commutative. 254 (A ⊕ B ) ⊕ C = −((A ⊕ B ) + C ) = −(−(A + B ) + C ) = A + B − C , but A ⊕ (B ⊕ C ) = −(A + (B ⊕ C )) = −(A + (−(B + C )) = −A + B + C . Thus the operation ⊕ is not associative. A5: An element B is needed such that A ⊕ B = A for all A ∈ M2 (R), but −(A + B ) = A =⇒ B = −2A. Since this depends on A, there is no zero vector. A6: Since there is no zero vector, we cannot deﬁne the additive inverse. Let r, s, t ∈ R. A7: 1 · A = −1A = −A = A. A8: (st) · A = −[(st)A], but s · (t · A) = s · (−tA) = −[s(−tA)] = s(tA) = (st)A, so it follows that (st) · A = s · (t · A). A9: r · (A ⊕ B ) = −r(A ⊕ B ) = −r(−(A + B )) = r(A + B ) = rA + rB = −[(−rA)+(−rB )] = −rA ⊕ (−rB ) = r · A + r · B , whereas rA ⊕ rB = −(rA + rB ) = −rA − rB , so this axiom fails to hold. A10: (s + t) · A = −(s + t)A = sA +(−tA), but s · A ⊕ t · A = −(sA) ⊕ (−tA) = −[−(sA)+(−tA)] = sA + tA, hence (s + t) · A = s · A ⊕ t · A. We conclude that only the axioms (A1)-(A3) hold. 17. Let C2 = {(z1 , z2 ) : zi ∈ C} under the usual operations of addition and scalar multiplication. A3 and A4: Follow from the properties of addition in C2 . 
A5: (0, 0) is the zero vector in C2 since (z1 , z2 ) + (0, 0) = (z1 + 0, z2 + 0) = (z1 , z2 ) for all (z1 , z2 ) ∈ C2 . A6: The additive inverse of the vector (z1 , z2 ) ∈ C2 is the vector (−z1 , −z2 ) for all (z1 , z2 ) ∈ C2 . A7-A10: Follows from properties in C2 . Thus, C2 together with its deﬁned operations, is a complex vector space. 18. Let M2 (C) = {A : A is a 2 × 2 matrix with complex entries} under the usual operations of matrix addition and multiplication. A3 and A4: Follows from properties of matrix addition. A5: The zero vector, 0, for M2 (C) is the 2 × 2 zero matrix, 02 . A6: For each vector A = [aij ] ∈ M2 (C), the vector −A = [−aij ] ∈ M2 (C) satisﬁes A + (−A) = 02 . A7-A10: Follow from properties of matrix algebra. Hence, M2 (C) together with its deﬁned operations, is a complex vector space. 19. Let u = (u1 , u2 , u3 ) and v = (v1 , v2 , v3 ) be vectors in C3 , and let k ∈ R. A1: u + v = (u1 , u2 , u3 ) + (v1 , v2 , v3 ) = (u1 + v1 , u2 + v2 , u3 + v3 ) ∈ C3 . A2: k u = k (u1 , u2 , u3 ) = (ku1 , ku2 , ku3 ) ∈ C3 . A3 and A4: Satisﬁed by the properties of addition in C3 . A5: (0, 0, 0) is the zero vector in C3 since (0, 0, 0) + (z1 , z2 , z3 ) = (0 + z1 , 0 + z2 , 0 + z3 ) = (z1 , z2 , z3 ) for all (z1 , z2 , z3 ) ∈ C3 . A6: (−z1 , −z2 , −z3 ) is the additive inverse of (z1 , z2 , z3 ) because (z1 , z2 , z3 ) + (−z1 , −z2 , −z3 ) = (0, 0, 0) for all (z1 , z2 , z3 ) ∈ C3 . Let r, s, t ∈ R. A7: 1 · u = 1 · (u1 , u2 , u3 ) = (1u1 , 1u2 , 1u3 ) = (u1 , u2 , u3 ) = u. A8: (st)u = (st)(u1 , u2 , u3 ) = ((st)u1 , (st)u2 , (st)u3 ) = (s(tu1 ), s(tu2 ), s(tu3 )) = s(tu1 , tu2 , tu3 ) = s(t(u1 , u2 , u3 )) = s(tu). A9: r(u + v) = r(u1 + v1 , u2 + v2 , u3 + v3 ) = (r(u1 + v1 ), r(u2 + v2 ), r(u3 + v3 )) = (ru1 + rv1 , ru2 + rv2 , ru3 + rv3 ) = (ru1 , ru2 , ru3 ) + (rv1 , rv2 , rv3 ) = r(u1 , u2 , u3 ) + r(v1 , v2 , v3 ) = ru + rv. 
A10: (s + t)u = (s + t)(u1 , u2 , u3 ) = ((s + t)u1 , (s + t)u2 , (s + t)u3 ) = (su1 + tu1 , su2 + tu2 , su3 + tu3 ) = (su1 , su2 , su3 ) + (tu1 , tu2 , tu3 ) = s(u1 , u2 , u3 ) + t(u1 , u2 , u3 ) = su + tu. Thus, C3 is a real vector space. 20. NO. If we scalar multiply a vector (x, y, z ) ∈ R3 by a non-real (complex) scalar r, we will obtain the vector (rx, ry, rz ) ∈ R3 , since rx, ry, rz ∈ R. 21. Let k be an arbitrary scalar, and let u be an arbitrary vector in V . Then, using property 2 of Theorem 255 4.2.6, we have k 0 = k (0u) = (k 0)u = 0u = 0. 22. Assume that k is a scalar and u ∈ V such that k u = 0. If k = 0, the desired conclusion is already 1 reached. If, on the other hand, k = 0, then we have k ∈ R and 1 1 · (k u) = · 0 = 0, k k or 1 · k u = 0. k Hence, 1 · u = 0, and the unit property A7 now shows that u = 0, and again, we reach the desired conclusion. 23. We verify the axioms A1-A10 for a vector space. A1: If a0 + a1 x + · · · + an xn and b0 + b1 x + · · · + bn xn belong to Pn , then (a0 + a1 x + · · · + an xn ) + (b0 + b1 x + · · · + bn xn ) = (a0 + b0 ) + (a1 + b1 )x + · · · + (an + bn )xn , which again belongs to Pn . Therefore, Pn is closed under addition. A2: If a0 + a1 x + · · · + an xn and r is a scalar, then r · (a0 + a1 x + · · · + an xn ) = (ra0 ) + (ra1 )x + · · · + (ran )xn , which again belongs to Pn . Therefore, Pn is closed under scalar multiplication. A3: Let p(x) = a0 + a1 x + · · · + an xn and q (x) = b0 + b1 x + · · · + bn xn belong to Pn . Then p(x) + q (x) = (a0 + a1 x + · · · + an xn ) + (b0 + b1 x + · · · + bn xn ) = (a0 + b0 ) + (a1 + b1 )x + · · · + (an + bn )xn = (b0 + a0 ) + (b1 + a1 )x + · · · + (bn + an )xn = (b0 + b1 x + · · · + bn xn ) + (a0 + a1 x + · · · + an xn ) = q (x) + p(x), so Pn satisﬁes commutativity under addition. A4: Let p(x) = a0 + a1 x + · · · + an xn , q (x) = b0 + b1 x + · · · + bn xn , and r(x) = c0 + c1 x + · · · + cn xn belong to Pn . 
Then [p(x) + q (x)] + r(x) = [(a0 + a1 x + · · · + an xn ) + (b0 + b1 x + · · · + bn xn )] + (c0 + c1 x + · · · + cn xn ) = [(a0 + b0 ) + (a1 + b1 )x + · · · + (an + bn )xn ] + (c0 + c1 x + · · · + cn xn ) = [(a0 + b0 ) + c0 ] + [(a1 + b1 ) + c1 ]x + · · · + [(an + bn ) + cn ]xn = [a0 + (b0 + c0 )] + [a1 + (b1 + c1 )]x + · · · + [an + (bn + cn )]xn = (a0 + a1 x + · · · + an xn ) + [(b0 + c0 ) + (b1 + c1 )x + · · · + (bn + cn )xn ] = (a0 + a1 x + · · · + an xn ) + [(b0 + b1 x + · · · + bn xn ) + (c0 + c1 x + · · · + cn xn )] = p(x) + [q (x) + r(x)], so Pn satisﬁes associativity under addition. A5: The zero vector is the zero polynomial z (x) = 0 + 0 · x + · · · + 0 · xn , and it is readily veriﬁed that this polynomial satisﬁes z (x) + p(x) = p(x) = p(x) + z (x) for all p(x) ∈ Pn . A6: The additive inverse of p(x) = a0 + a1 x + · · · + an xn is −p(x) = (−a0 ) + (−a1 )x + · · · + (−an )xn . 256 It is readily veriﬁed that p(x) + (−p(x)) = z (x), where z (x) is deﬁned in A5. A7: We have 1 · (a0 + a1 x + · · · + an xn ) = a0 + a1 x + · · · + an xn , which demonstrates the unit property in Pn . A8: Let r, s ∈ R, and p(x) = a0 + a1 x + · · · + an xn ∈ Pn . Then (rs) · p(x) = (rs) · (a0 + a1 x + · · · + an xn ) = [(rs)a0 ] + [(rs)a1 ]x + · · · + [(rs)an ]xn = r[(sa0 ) + (sa1 )x + · · · + (san )xn ] = r[s(a0 + a1 x + · · · + an xn )] = r · (s · p(x)), which veriﬁes the associativity of scalar multiplication. A9: Let r ∈ R, let p(x) = a0 + a1 x + · · · + an xn ∈ Pn , and let q (x) = b0 + b1 x + · · · + bn xn ∈ Pn . Then r · (p(x) + q (x)) = r · ((a0 + a1 x + · · · + an xn ) + (b0 + b1 x + · · · + bn xn )) = r · [(a0 + b0 ) + (a1 + b1 )x + · · · + (an + bn )xn ] = [r(a0 + b0 )] + [r(a1 + b1 )]x + · · · + [r(an + bn )]xn = [(ra0 ) + (ra1 )x + · · · + (ran )xn ] + [(rb0 ) + (rb1 )x + · · · + (rbn )xn ] = [r(a0 + a1 x + · · · + an xn )] + [r(b0 + b1 x + · · · + bn xn )] = r · p(x) + r · q (x), which veriﬁes the distributivity of scalar multiplication over vector addition. 
A10: Let r, s ∈ R and let p(x) = a0 + a1 x + · · · + an xn ∈ Pn . Then (r + s) · p(x) = (r + s) · (a0 + a1 x + · · · + an xn ) = [(r + s)a0 ] + [(r + s)a1 ]x + · · · + [(r + s)an ]xn = [ra0 + ra1 x + · · · + ran xn ] + [sa0 + sa1 x + · · · + san xn ] = r(a0 + a1 x + · · · + an xn ) + s(a0 + a1 x + · · · + an xn ) = r · p(x) + s · p(x), which veriﬁes the distributivity of scalar multiplication over scalar addition. The above veriﬁcation of axioms A1-A10 shows that Pn is a vector space. Solutions to Section 4.3 True-False Review: 1. FALSE. The null space of an m × n matrix A is a subspace of Rn , not Rm . 2. FALSE. It is not necessarily the case that 0 belongs to the solution set of the linear system. In fact, 0 belongs to the solution set of the linear system if and only if the system is homogeneous. 3. TRUE. If b = 0, then the line is y = mx, which is a line through the origin of R2 , a one-dimensional subspace of R2 . On the other hand, if b = 0, then the origin does not lie on the given line, and therefore since the line does not contain the zero vector, it cannot form a subspace of R2 in this case. 257 4. FALSE. The spaces Rm and Rn , with m < n, are not comparable. Neither of them is a subset of the other, and therefore, neither of them can form a subspace of the other. 5. TRUE. Choosing any vector v in S , the scalar multiple 0v = 0 still belongs to S . 6. FALSE. In order for a subset of V to form a subspace, the same operations of addition and scalar multiplication must be used in the subset as used in V . 7. FALSE. This set is not closed under addition. For instance, the point (1, 1, 0) lies in the xy -plane, the point (0, 1, 1) lies in the yz -plane, but (1, 1, 0) + (0, 1, 1) = (1, 2, 1) does not belong to S . Therefore, S is not a subspace of V . 8. FALSE. For instance, if we consider V = R3 , then the xy -plane forms a subspace of V , and the x-axis forms a subspace of V . Both of these subspaces contain in common all points along the x-axis. 
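The counterexample in review question 7 (the union of the xy-plane and the yz-plane in R^3 is not closed under addition) is easy to check directly. A small sketch:

```python
def in_xy_plane(v):
    return v[2] == 0

def in_yz_plane(v):
    return v[0] == 0

def in_union(v):
    return in_xy_plane(v) or in_yz_plane(v)

u = (1, 1, 0)   # lies in the xy-plane
w = (0, 1, 1)   # lies in the yz-plane
total = tuple(a + b for a, b in zip(u, w))  # componentwise sum (1, 2, 1)

assert in_union(u) and in_union(w)
assert not in_union(total)   # the union is not closed under addition
```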
Other examples abound as well. Problems: 1. S = {x ∈ R2 : x = (2k, −3k ), k ∈ R}. (a) S is certainly nonempty. Let x, y ∈ S . Then for some r, s ∈ R, x = (2r, −3r) and y = (2s, −3s). Hence, x + y = (2r, −3r) + (2s, −3s) = (2(r + s), −3(r + s)) = (2k, −3k ), where k = r + s. Consequently, S is closed under addition. Further, if c ∈ R, then cx = c(2r, −3r) = (2cr, −3cr) = (2t, −3t), where t = cr. Therefore S is also closed under scalar multiplication. It follows from Theorem 4.3.2 that S is a subspace of R2 . (b) The subspace S consists of all points lying along the line in the accompanying ﬁgure. y y=-3x/2 x Figure 63: Figure for Problem 1(b) 2. S = {x ∈ R3 : x = (r − 2s, 3r + s, s), r, s ∈ R}. 258 (a) S is certainly nonempty. Let x, y ∈ S . Then for some r, s, u, v ∈ R, x = (r − 2s, 3r + s, s) and y = (u − 2v, 3u + v, v ). Hence, x + y = (r − 2s, 3r + s, s) + (u − 2v, 3u + v, v ) = ((r + u) − 2(s + v ), 3(r + u) + (s + v ), s + v ) = (a − 2b, 3a + b, b), where a = r + u, and b = s + v . Consequently, S is closed under addition. Further, if c ∈ R, then cx = c(r − 2s, 3r + s, s) = (cr − 2cs, 3cr + cs, cs) = (k − 2l, 3k + l, l), where k = cr and l = cs. Therefore S is also closed under scalar multiplication. It follows from Theorem 4.3.2 that S is a subspace of R3 . (b) The coordinates of the points in S are (x, y, z ) where x = r − 2s, y = 3r + s, z = s. Eliminating r and s between these three equations yields 3x − y + 7z = 0. 3. S = {(x, y ) ∈ R2 : 3x + 2y = 0}. S = ∅ since (0, 0) ∈ S . Closure under Addition: Let (x1 , x2 ), (y1 , y2 ) ∈ S . Then 3x1 + 2x2 = 0 and 3y1 + 2y2 = 0, so 3(x1 + y1 ) + 2(x2 + y2 ) = 0, which implies that (x1 + y1 , x2 + y2 ) ∈ S . Closure under Scalar Multiplication: Let a ∈ R and (x1 , x2 ) ∈ S . Then 3x1 + 2x2 = 0 =⇒ a(3x1 + 2x2 ) = a · 0 =⇒ 3(ax1 ) + 2(ax2 ) = 0, which shows that (ax1 , ax2 ) ∈ S . Thus, S is a subspace of R2 by Theorem 4.3.2. 4. 
S = {(x1, 0, x3, 2) : x1, x3 ∈ R} is not a subspace of R^4 because it is not closed under addition. To see this, let (a, 0, b, 2), (c, 0, d, 2) ∈ S. Then (a, 0, b, 2) + (c, 0, d, 2) = (a + c, 0, b + d, 4) ∉ S.

5. S = {(x, y, z) ∈ R^3 : x + y + z = 1} is not a subspace of R^3 because (0, 0, 0) ∉ S, since 0 + 0 + 0 ≠ 1.

6. S = {u ∈ R^n : Au = b, A is a fixed m × n matrix} is not a subspace of R^n since 0 ∉ S.

7. S = {(x, y) ∈ R^2 : x^2 − y^2 = 0} is not a subspace of R^2, since it is not closed under addition, as we now observe: If (x1, y1), (x2, y2) ∈ S, then (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2), and

(x1 + x2)^2 − (y1 + y2)^2 = x1^2 + 2x1x2 + x2^2 − (y1^2 + 2y1y2 + y2^2) = (x1^2 − y1^2) + (x2^2 − y2^2) + 2(x1x2 − y1y2) = 0 + 0 + 2(x1x2 − y1y2) ≠ 0, in general. Thus, (x1, y1) + (x2, y2) ∉ S in general.

8. S = {A ∈ M2(R) : det(A) = 1} is not a subspace of M2(R). To see this, let k ∈ R be a scalar and let A ∈ S. Then det(kA) = k^2 det(A) = k^2 · 1 = k^2 ≠ 1, unless k = ±1. Note also that det(A) = 1 and det(B) = 1 does not imply that det(A + B) = 1.

9. S = {A = [aij] ∈ Mn(R) : aij = 0 whenever i < j}. Note that S ≠ ∅ since 0n ∈ S. Now let A = [aij] and B = [bij] be lower triangular matrices, and let c ∈ R. Then aij = 0 and bij = 0 whenever i < j, so aij + bij = 0 and caij = 0 whenever i < j. Hence A + B = [aij + bij] and cA = [caij] are also lower triangular matrices. Therefore S is closed under addition and scalar multiplication. Consequently, S is a subspace of Mn(R) by Theorem 4.3.2.

10. S = {A ∈ Mn(R) : A is invertible} is not a subspace of Mn(R) because 0n ∉ S.

11. S = {A ∈ M2(R) : A^T = A}. S ≠ ∅ since 02 ∈ S.

Closure under Addition: If A, B ∈ S, then (A + B)^T = A^T + B^T = A + B, which shows that A + B ∈ S.

Closure under Scalar Multiplication: If r ∈ R and A ∈ S, then (rA)^T = rA^T = rA, which shows that rA ∈ S.

Consequently, S is a subspace of M2(R) by Theorem 4.3.2.

12. S = {A ∈ M2(R) : A^T = −A}. S ≠ ∅ because 02 ∈ S.
Closure under Addition: If A, B ∈ S , then (A + B )T = AT + B T = −A + (−B ) = −(A + B ), which shows that A + B ∈ S . Closure under Scalar Multiplication: If k ∈ R and A ∈ S , then (kA)T = kAT = k (−A) = −(kA), which shows that kA ∈ S . Thus, S is a subspace of M2 (R) by Theorem 4.3.2. 13. S = {f ∈ V : f (a) = f (b)}, where V is the vector space of all real-valued functions deﬁned on [a, b]. Note that S = ∅ since the zero function O(x) = 0 for all x belongs to S . Closure under Addition: If f, g ∈ S , then (f + g )(a) = f (a) + g (a) = f (b) + g (b) = (f + g )(b), which shows that f + g ∈ S . Closure under Scalar Multiplication: If k ∈ R and f ∈ S , then (kf )(a) = kf (a) = kf (b) = (kf )(b), which shows that kf ∈ S . Therefore S is a subspace of V by Theorem 4.3.2. 14. S = {f ∈ V : f (a) = 1}, where V is the vector space of all real-valued functions deﬁned on [a, b]. We claim that S is not a subspace of V . Not Closed under Addition: If f, g ∈ S , then (f + g )(a) = f (a) + g (a) = 1 + 1 = 2 = 1, which shows that f + g ∈ S. / It can also be shown that S is not closed under scalar multiplication. 15. S = {f ∈ V : f (−x) = f (x) for all x ∈ R}. Note that S = ∅ since the zero function O(x) = 0 for all x belongs to S . Let f, g ∈ S . Then (f + g )(−x) = f (−x) + g (−x) = f (x) + g (x) = (f + g )(x) and if c ∈ R, then (cf )(−x) = cf (−x) = cf (x) = (cf )(x), so f + g and c · f belong to S . Therefore, S is closed under addition and scalar multiplication. Therefore, S is a subspace of V by Theorem 4.3.2. 16. S = {p ∈ P2 : p(x) = ax2 + b, a, b ∈ R}. Note that S = ∅ since p(x) = 0 belongs to S . Closure under Addition: Let p, q ∈ S . Then for some a1 , a2 , b1 , b2 ∈ R, p(x) = a1 x2 + b1 and q (x) = a2 x2 + b2 . 260 Hence, (p + q )(x) = p(x) + q (x) = (a1 + a2 )x2 + b1 + b2 = ax2 + b, where a = a1 + a2 and b = b1 + b2 , so that S is closed under addition. 
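The closure argument for Problem 16 can also be checked computationally by representing p(x) = ax^2 + b as the coefficient pair (a, b). A minimal sketch; the sample coefficients are arbitrary:

```python
def poly_eval(p, x):
    """Evaluate p(x) = a*x**2 + b, with p stored as the pair (a, b)."""
    a, b = p
    return a * x ** 2 + b

def padd(p, q):
    return (p[0] + q[0], p[1] + q[1])

def psmul(k, p):
    return (k * p[0], k * p[1])

p, q, k = (3, -1), (-2, 5), 4

# The sum and scalar multiple stay in the form a*x**2 + b ...
r1, r2 = padd(p, q), psmul(k, p)

# ... and agree with pointwise addition/scaling of the functions.
for x in (-2, 0, 1, 3):
    assert poly_eval(r1, x) == poly_eval(p, x) + poly_eval(q, x)
    assert poly_eval(r2, x) == k * poly_eval(p, x)
```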
Closure under Scalar Multiplication: If k ∈ R, then (kp)(x) = kp(x) = ka1 x2 + kb1 = cx2 + d, where c = ka1 and d = kb1 , so that S is also closed under scalar multiplication. It follows from Theorem 4.3.2 that S is a subspace of P2 . 17. S = {p ∈ P2 : p(x) = ax2 + 1, a ∈ R}. We claim that S is not closed under addition: Not Closed under Addition: Let p, q ∈ S . Then for some a1 , a2 ∈ R, p(x) = a1 x2 + 1 and q (x) = a2 x2 + 1. Hence, (p + q )(x) = p(x) + q (x) = (a1 + a2 )x2 + 2 = ax2 + 2 where a = a1 + a2 . Consequently, p + q ∈ S , and therefore S is not closed under addition. It follows that S / is not a subspace of P2 . 18. S = {y ∈ C 2 (I ) : y + 2y − y = 0}. Note that S is nonempty since the function y = 0 belongs to S . Closure under Addition: Let y1 , y2 ∈ S . (y1 + y2 ) + 2(y1 + y2 ) − (y1 + y2 ) = y1 + y2 + 2(y1 + y2 ) − y1 − y2 = y1 + 2y1 − y1 + y2 + 2y2 − y2 Thus, y1 + y2 ∈ S . = (y1 + 2y1 − y1 ) + (y2 + 2y2 − y2 ) = 0 + 0 = 0. Closure under Scalar Multiplication: Let k ∈ R and y1 ∈ S . (ky1 ) + 2(ky1 ) − (ky1 ) = ky1 + 2ky1 − ky1 = k (y1 + 2y1 − y1 ) = k · 0 = 0, which shows that ky1 ∈ S . Hence, S is a subspace of V by Theorem 4.3.2. 19. S = {y ∈ C 2 (I ) : y + 2y − y = 1}. S is not a subspace of V . We show that S fails to be closed under addition (one can also verify that it is not closed under scalar multiplication, but this is unnecessary if one shows the failure of closure under addition): Not Closed under Addition: Let y1 , y2 ∈ S . (y1 + y2 ) + 2(y1 + y2 ) − (y1 + y2 ) = y1 + y2 + 2(y1 + y2 ) − y1 − y2 = (y1 + 2y1 − y1 ) + (y2 + 2y2 − y2 ) Thus, y1 + y2 ∈ S . / = 1 + 1 = 2 = 1. Or alternatively: Not Closed under Scalar Multiplication: Let k ∈ R and y1 ∈ S . ((ky1 ) + 2(ky1 ) − (ky1 ) = ky1 + 2ky1 − ky1 = k (y1 + 2y1 − y1 ) = k · 1 = k = 1, unless k = 1. Therefore, ky1 ∈ S unless k = 1. / 1 −2 1 20. A = 4 −7 −2 . nullspace(A) = {x ∈ R3 : Ax = 0}. 
The reduced row echelon form of the augmented matrix of the system Ax = 0 is [1 0 0 | 0; 0 1 0 | 0; 0 0 1 | 0]. Consequently, the linear system Ax = 0 has only the trivial solution (0, 0, 0), so nullspace(A) = {(0, 0, 0)}.

21. A = [1 3 −2 1; 3 10 −4 6; 2 5 −6 −1]. nullspace(A) = {x ∈ R^4 : Ax = 0}. The reduced row echelon form of the augmented matrix of the system Ax = 0 is [1 0 −8 −8 | 0; 0 1 2 3 | 0; 0 0 0 0 | 0], so that nullspace(A) = {(8r + 8s, −2r − 3s, r, s) : r, s ∈ R}.

22. A = [1 i −2; 3 4i −5; −1 −3i i]. nullspace(A) = {x ∈ C^3 : Ax = 0}. The reduced row echelon form of the augmented matrix of the system Ax = 0 is [1 0 0 | 0; 0 1 0 | 0; 0 0 1 | 0]. Consequently, the linear system Ax = 0 has only the trivial solution (0, 0, 0), so nullspace(A) = {(0, 0, 0)}.

23. Since the zero function y(x) = 0 for all x ∈ I is not a solution to the differential equation, the set of all solutions does not contain the zero vector from C^2(I); hence it is not a vector space at all and cannot be a subspace.

24. (a) As an example, we can let V = R^2. If we take S1 to be the set of points lying on the x-axis and S2 to be the set of points lying on the y-axis, then it is readily seen that S1 and S2 are both subspaces of V. However, the union of these subspaces is not closed under addition. For instance, the points (1, 0) and (0, 1) lie in S1 ∪ S2, but (1, 0) + (0, 1) = (1, 1) ∉ S1 ∪ S2. Therefore, the union S1 ∪ S2 does not form a subspace of V.

(b) Since S1 and S2 are both subspaces of V, they both contain the zero vector. It follows that the zero vector is an element of S1 ∩ S2, hence this subset is nonempty. Now let u and v be vectors in S1 ∩ S2, and let c be a scalar. Then u and v are both in S1 and both in S2. Since S1 and S2 are each subspaces of V, u + v and cu are vectors in both S1 and S2, hence they are in S1 ∩ S2. This implies that S1 ∩ S2 is a nonempty subset of V which is closed under both addition and scalar multiplication. Therefore, S1 ∩ S2 is a subspace of V.
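The row reductions in Problems 20 and 21 above can be reproduced with a computer algebra system. A minimal sketch, assuming sympy is available, using the matrix from Problem 21:

```python
from sympy import Matrix

# Matrix A from Problem 21 above.
A = Matrix([[1, 3, -2, 1],
            [3, 10, -4, 6],
            [2, 5, -6, -1]])

# rref() returns the reduced row echelon form and the pivot columns.
R, pivots = A.rref()
assert R == Matrix([[1, 0, -8, -8],
                    [0, 1, 2, 3],
                    [0, 0, 0, 0]])
assert pivots == (0, 1)

# The claimed null space {(8r + 8s, -2r - 3s, r, s)}: the generators
# obtained from (r, s) = (1, 0) and (0, 1) must satisfy Ax = 0.
for v in (Matrix([8, -2, 1, 0]), Matrix([8, -3, 0, 1])):
    assert A * v == Matrix([0, 0, 0])
```

The same rref/nullspace calls applied to the matrices of Problems 20 and 22 confirm that those systems have only the trivial solution.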
(c) Note that S1 is a subset of S1 + S2 (every vector v ∈ S1 can be written as v + 0 ∈ S1 + S2), so S1 + S2 is nonempty. Closure under Addition: Let u and v belong to S1 + S2. We may write u = u1 + u2 and v = v1 + v2, where u1, v1 ∈ S1 and u2, v2 ∈ S2. Then u + v = (u1 + u2) + (v1 + v2) = (u1 + v1) + (u2 + v2). Since S1 and S2 are closed under addition, we have that u1 + v1 ∈ S1 and u2 + v2 ∈ S2. Therefore, u + v = (u1 + v1) + (u2 + v2) ∈ S1 + S2. Hence, S1 + S2 is closed under addition. Closure under Scalar Multiplication: Next, let u ∈ S1 + S2 and let c be a scalar. We may write u = u1 + u2, where u1 ∈ S1 and u2 ∈ S2. Thus, c·u = c·u1 + c·u2, and since S1 and S2 are closed under scalar multiplication, c·u1 ∈ S1 and c·u2 ∈ S2. Therefore, c·u = c·u1 + c·u2 ∈ S1 + S2. Hence, S1 + S2 is closed under scalar multiplication.

Solutions to Section 4.4

True-False Review:

1. TRUE. By its very definition, when a linear span of a set of vectors is formed, that span is closed under addition and under scalar multiplication. Therefore, it is a subspace of V.

2. FALSE. In order to say that S spans V, it must be true that all vectors in V can be expressed as a linear combination of the vectors in S, not simply "some" vector.

3. TRUE. Every vector in V can be expressed as a linear combination of the vectors in S, and therefore it is also true that every vector in W can be expressed as a linear combination of the vectors in S. Therefore, S spans W, and S is a spanning set for W.

4. FALSE. To illustrate this, consider V = R^2 and the spanning set S = {(1, 0), (0, 1), (1, 1)}. Then the vector v = (2, 2) can be expressed as a linear combination of the vectors in S in more than one way: v = 2(1, 1) and v = 2(1, 0) + 2(0, 1). Many other illustrations, using a variety of different vector spaces, are also possible.

5. TRUE.
To say that a set S of vectors in V spans V is to say that every vector in V belongs to span(S). So V is a subset of span(S). But of course, every vector in span(S) belongs to the vector space V, and so span(S) is a subset of V. Therefore, span(S) = V.

6. FALSE. This is not necessarily the case. For example, the linear span of the vectors (1, 1, 1) and (2, 2, 2) is simply a line through the origin, not a plane.

7. FALSE. There are vector spaces that do not contain finite spanning sets. For instance, if V is the vector space consisting of all polynomials with coefficients in R, then since a finite spanning set could not contain polynomials of arbitrarily large degree, no finite spanning set is possible for V.

8. FALSE. To illustrate this, consider V = R^2 and the spanning set S = {(1, 0), (0, 1), (1, 1)}. The proper subset S' = {(1, 0), (0, 1)} is still a spanning set for V. Therefore, it is possible for a proper subset of a spanning set for V to still be a spanning set for V.

9. TRUE. The general matrix [a b c; 0 d e; 0 0 f] in this vector space can be written as aE11 + bE12 + cE13 + dE22 + eE23 + fE33, and therefore the matrices in the set {E11, E12, E13, E22, E23, E33} span the vector space.

10. FALSE. For instance, it is easily verified that {x^2, x^2 + x, x^2 + 1} is a spanning set for P2, and yet it contains only polynomials of degree 2.

11. FALSE. For instance, consider m = 2 and n = 3. Then one spanning set for R^2 is {(1, 0), (0, 1), (1, 1), (2, 2)}, which consists of four vectors. On the other hand, one spanning set for R^3 is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, which consists of only three vectors.

12. TRUE. This is explained in True-False Review Question 7 above.

Problems:

1. {(1, −1), (2, −2), (2, 3)}. Since v1 = (1, −1) and v2 = (2, 3) are non-collinear, the given set of vectors does span R^2. (See the comment preceding Example 4.4.3 in the text.)

2.
The given set of vectors does not span R^2 since it does not contain two nonzero, non-collinear vectors.

3. The three vectors in the given set are all collinear. Consequently, the set of vectors does not span R^2.

4. Since det [1 −1 1; 2 4 5; −2 3 1] = −23 ≠ 0, the given vectors are not coplanar, and therefore span R^3.

5. Since det [1 2 4; −2 3 −1; 1 1 2] = −7 ≠ 0, the given vectors are not coplanar, and therefore span R^3. Note that we can simply ignore the zero vector (0, 0, 0).

6. Since det [2 3 1; −1 −3 1; 4 5 3] = 0, the vectors are coplanar, and therefore the given set does not span R^3.

7. Since det [1 3 4; 2 4 5; 3 5 6] = 0, the vectors are coplanar, and therefore the given set does not span R^3. The linear span of the vectors is the set of points (x, y, z) for which the system c1(1, 2, 3) + c2(3, 4, 5) + c3(4, 5, 6) = (x, y, z) is consistent. Reducing the augmented matrix [1 3 4 | x; 2 4 5 | y; 3 5 6 | z] of this system yields [1 3 4 | x; 0 2 3 | 2x − y; 0 0 0 | x − 2y + z]. This system is consistent if and only if x − 2y + z = 0. Consequently, the linear span of the given set of vectors consists of all points lying on the plane with equation x − 2y + z = 0.

8. Let (x1, x2) ∈ R^2 and a, b ∈ R. Then (x1, x2) = av1 + bv2 = a(2, −1) + b(3, 2) = (2a, −a) + (3b, 2b) = (2a + 3b, −a + 2b). It follows that 2a + 3b = x1 and −a + 2b = x2, which implies that a = (2x1 − 3x2)/7 and b = (x1 + 2x2)/7, so (x1, x2) = ((2x1 − 3x2)/7)v1 + ((x1 + 2x2)/7)v2. Thus {v1, v2} spans R^2, and in particular, (5, −7) = (31/7)v1 − (9/7)v2.

9. Let (x, y, z) ∈ R^3 and a, b, c ∈ R. Then (x, y, z) = av1 + bv2 + cv3 = a(−1, 3, 2) + b(1, −2, 1) + c(2, 1, 1) = (−a, 3a, 2a) + (b, −2b, b) + (2c, c, c) = (−a + b + 2c, 3a − 2b + c, 2a + b + c). These equalities result in the system: −a + b + 2c = x, 3a − 2b + c = y, 2a + b + c = z. Upon solving for a, b, and c we obtain a = (−3x + y + 5z)/16, b = (−x − 5y + 7z)/16, and c = (7x + 3y − z)/16. Consequently, {v1, v2, v3} spans R^3, and (x, y, z) = ((−3x + y + 5z)/16)v1 + ((−x − 5y + 7z)/16)v2 + ((7x + 3y − z)/16)v3.
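The closed-form coefficients obtained in Problem 9 above can be spot-checked numerically. A minimal sketch, assuming numpy is available; the sample point (x, y, z) is an arbitrary choice:

```python
import numpy as np

# Columns are v1 = (-1, 3, 2), v2 = (1, -2, 1), v3 = (2, 1, 1) from Problem 9.
M = np.array([[-1.0,  1.0, 2.0],
              [ 3.0, -2.0, 1.0],
              [ 2.0,  1.0, 1.0]])

x, y, z = 4.0, -1.0, 7.0            # arbitrary sample target vector
a, b, c = np.linalg.solve(M, np.array([x, y, z]))

# Compare against the closed-form coefficients derived above.
assert np.isclose(a, (-3*x + y + 5*z) / 16)
assert np.isclose(b, (-x - 5*y + 7*z) / 16)
assert np.isclose(c, (7*x + 3*y - z) / 16)
```

Any other target vector works equally well, since the coefficient matrix is invertible.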
10. Let (x, y) ∈ R^2 and a, b, c ∈ R. Then (x, y) = av1 + bv2 + cv3 = a(1, 1) + b(−1, 2) + c(1, 4) = (a, a) + (−b, 2b) + (c, 4c) = (a − b + c, a + 2b + 4c). These equalities result in the system: a − b + c = x, a + 2b + 4c = y. Solving the system, we find that a = (2x + y − 6c)/3 and b = (y − x − 3c)/3, where c is a free real variable. It follows that (x, y) = ((2x + y − 6c)/3)v1 + ((y − x − 3c)/3)v2 + cv3, which implies that the vectors v1, v2, and v3 span R^2. Moreover, if c = 0, then (x, y) = ((2x + y)/3)v1 + ((y − x)/3)v2, so R^2 = span{v1, v2} also.

11. x = (c1, c2, c2 − 2c1) = (c1, 0, −2c1) + (0, c2, c2) = c1(1, 0, −2) + c2(0, 1, 1) = c1v1 + c2v2. Thus, {(1, 0, −2), (0, 1, 1)} spans S.

12. v = (c1, c2, c2 − 2c1, c1 − 2c2) = (c1, 0, −2c1, c1) + (0, c2, c2, −2c2) = c1(1, 0, −2, 1) + c2(0, 1, 1, −2). Thus, {(1, 0, −2, 1), (0, 1, 1, −2)} spans S.

13. x − 2y − z = 0 =⇒ x = 2y + z, so every v ∈ S can be written v = (2y + z, y, z) = (2y, y, 0) + (z, 0, z) = y(2, 1, 0) + z(1, 0, 1). Therefore S = {v ∈ R^3 : v = a(2, 1, 0) + b(1, 0, 1), a, b ∈ R}; hence {(2, 1, 0), (1, 0, 1)} spans S.

14. nullspace(A) = {x ∈ R^3 : Ax = 0}. Ax = 0 =⇒ [1 2 3; 1 3 4; 2 4 6][x1; x2; x3] = [0; 0; 0]. Performing Gauss-Jordan elimination on the augmented matrix of the system, we obtain: [1 2 3 | 0; 1 3 4 | 0; 2 4 6 | 0] ∼ [1 2 3 | 0; 0 1 1 | 0; 0 0 0 | 0] ∼ [1 0 1 | 0; 0 1 1 | 0; 0 0 0 | 0]. From the last matrix we find that x1 = −r and x2 = −r, where x3 = r with r ∈ R. Consequently, x = (x1, x2, x3) = (−r, −r, r) = r(−1, −1, 1). Thus, nullspace(A) = {x ∈ R^3 : x = r(−1, −1, 1), r ∈ R} = span{(−1, −1, 1)}.

15. A = [1 2 3 5; 1 3 4 2; 2 4 6 −1]. nullspace(A) = {x ∈ R^4 : Ax = 0}. The reduced row echelon form of the augmented matrix of this system is [1 0 1 0 | 0; 0 1 1 0 | 0; 0 0 0 1 | 0]. Consequently, nullspace(A) = {r(1, 1, −1, 0) : r ∈ R} = span{(1, 1, −1, 0)}.

16. A ∈ S =⇒ A = [a b; b c], where a, b, c ∈ R. Thus, A = [a 0; 0 0] + [0 b; b 0] + [0 0; 0 c] = a[1 0; 0 0] + b[0 1; 1 0] + c[0 0; 0 1] = aA1 + bA2 + cA3. Therefore S = span{A1, A2, A3}.

17.
S = {A ∈ M2(R) : A = [0 α; −α 0], α ∈ R} = {A ∈ M2(R) : A = α[0 1; −1 0]} = span{[0 1; −1 0]}.

18. (a) S ≠ ∅ since [0 0; 0 0] ∈ S. Closure under Addition: Let x, y ∈ S. Then x + y = [x11 x12; 0 x22] + [y11 y12; 0 y22] = [x11 + y11, x12 + y12; 0, x22 + y22], which implies that x + y ∈ S. Closure under Scalar Multiplication: Let r ∈ R and x ∈ S. Then rx = r[x11 x12; 0 x22] = [rx11 rx12; 0 rx22], which implies that rx ∈ S. Consequently, S is a subspace of M2(R) by Theorem 4.3.2.

(b) A = [a11 a12; 0 a22] = a11[1 0; 0 0] + a12[0 1; 0 0] + a22[0 0; 0 1]. Therefore, S = span{[1 0; 0 0], [0 1; 0 0], [0 0; 0 1]}.

19. Let v ∈ span{v1, v2} and a, b ∈ R. Then v = av1 + bv2 = a(1, −1, 2) + b(2, −1, 3) = (a, −a, 2a) + (2b, −b, 3b) = (a + 2b, −a − b, 2a + 3b). Thus, span{v1, v2} = {v ∈ R^3 : v = (a + 2b, −a − b, 2a + 3b), a, b ∈ R}. Geometrically, span{v1, v2} is the plane through the origin determined by the two given vectors. The plane has parametric equations x = a + 2b, y = −a − b, and z = 2a + 3b. If a and b are eliminated from these equations, the resulting Cartesian equation is x − y − z = 0.

20. Let v ∈ span{v1, v2} and a, b ∈ R. Then v = av1 + bv2 = a(1, 2, −1) + b(−2, −4, 2) = (a, 2a, −a) + (−2b, −4b, 2b) = (a − 2b, 2a − 4b, −a + 2b) = (a − 2b)(1, 2, −1) = k(1, 2, −1), where k = a − 2b. Thus, span{v1, v2} = {v ∈ R^3 : v = k(1, 2, −1), k ∈ R}. Geometrically, span{v1, v2} is the line through the origin determined by the vector (1, 2, −1).

21. Let v ∈ span{v1, v2, v3} and a, b, c ∈ R. Then v = av1 + bv2 + cv3 = a(1, 1, −1) + b(2, 1, 3) + c(−2, −2, 2) = (a, a, −a) + (2b, b, 3b) + (−2c, −2c, 2c) = (a + 2b − 2c, a + b − 2c, −a + 3b + 2c). Assuming that v = (x, y, z) and using the last ordered triple, we obtain the system: a + 2b − 2c = x, a + b − 2c = y, −a + 3b + 2c = z. Performing Gauss-Jordan elimination on the augmented matrix of the system, we obtain:
[1 2 −2 | x; 1 1 −2 | y; −1 3 2 | z] ∼ [1 2 −2 | x; 0 −1 0 | y − x; 0 5 0 | x + z] ∼ [1 0 −2 | 2y − x; 0 1 0 | x − y; 0 0 0 | 5y − 4x + z]. It is clear from the last matrix that the subspace S of R^3 is a plane through (0, 0, 0) with Cartesian equation 4x − 5y − z = 0. Moreover, {v1, v2} also spans the subspace S, since v = av1 + bv2 + cv3 = a(1, 1, −1) + b(2, 1, 3) + c(−2, −2, 2) = a(1, 1, −1) + b(2, 1, 3) − 2c(1, 1, −1) = (a − 2c)(1, 1, −1) + b(2, 1, 3) = dv1 + bv2, where d = a − 2c ∈ R.

22. If v ∈ span{v1, v2}, then there exist a, b ∈ R such that v = av1 + bv2 = a(1, −1, 2) + b(2, 1, 3) = (a, −a, 2a) + (2b, b, 3b) = (a + 2b, −a + b, 2a + 3b). Hence, v = (3, 3, 4) is in span{v1, v2} provided there exist a, b ∈ R satisfying the system: a + 2b = 3, −a + b = 3, 2a + 3b = 4. Solving this system, we find that a = −1 and b = 2. Consequently, v = −v1 + 2v2, so that (3, 3, 4) ∈ span{v1, v2}.

23. If v ∈ span{v1, v2}, then there exist a, b ∈ R such that v = av1 + bv2 = a(−1, 1, 2) + b(3, 1, −4) = (−a, a, 2a) + (3b, b, −4b) = (−a + 3b, a + b, 2a − 4b). Hence v = (5, 3, −6) is in span{v1, v2} provided there exist a, b ∈ R satisfying the system: −a + 3b = 5, a + b = 3, 2a − 4b = −6. Solving this system, we find that a = 1 and b = 2. Consequently, v = v1 + 2v2, so that (5, 3, −6) ∈ span{v1, v2}.

24. If v ∈ span{v1, v2}, then there exist a, b ∈ R such that v = av1 + bv2 = (3a, a, 2a) + (−2b, −b, b) = (3a − 2b, a − b, 2a + b). Hence v = (1, 1, −2) is in span{v1, v2} provided there exist a, b ∈ R satisfying the system: 3a − 2b = 1, a − b = 1, 2a + b = −2. Since the augmented matrix [3 −2 | 1; 1 −1 | 1; 2 1 | −2] reduces to [1 −1 | 1; 0 1 | −2; 0 0 | 2], it follows that the system has no solution. Hence it must be the case that (1, 1, −2) ∉ span{v1, v2}.

25. If p ∈ span{p1, p2}, then there exist a, b ∈ R such that p(x) = ap1(x) + bp2(x), so p(x) = 2x^2 − x + 2 is in span{p1, p2} provided there exist a, b ∈ R such that 2x^2 − x + 2 = a(x − 4) + b(x^2 − x + 3) = ax − 4a + bx^2 − bx + 3b = bx^2 + (a − b)x + (3b − 4a). Equating like coefficients and solving, we find that a = 1 and b = 2.
Thus, 2x^2 − x + 2 = 1 · (x − 4) + 2 · (x^2 − x + 3) = p1(x) + 2p2(x), so p ∈ span{p1, p2}.

26. Let A ∈ span{A1, A2, A3} and c1, c2, c3 ∈ R. Then A = c1A1 + c2A2 + c3A3 = c1[1 −1; 2 0] + c2[0 1; −2 1] + c3[3 0; 1 2] = [c1 + 3c3, −c1 + c2; 2c1 − 2c2 + c3, c2 + 2c3]. Therefore span{A1, A2, A3} = {A ∈ M2(R) : A = [c1 + 3c3, −c1 + c2; 2c1 − 2c2 + c3, c2 + 2c3]}.

27. Let A ∈ span{A1, A2} and a, b ∈ R. Then A = aA1 + bA2 = a[1 2; −1 3] + b[−2 1; 1 −1] = [a 2a; −a 3a] + [−2b b; b −b] = [a − 2b, 2a + b; −a + b, 3a − b]. So span{A1, A2} = {A ∈ M2(R) : A = [a − 2b, 2a + b; −a + b, 3a − b]}. Now, to determine whether B ∈ span{A1, A2}, let [a − 2b, 2a + b; −a + b, 3a − b] = [3 1; −2 4]. This implies that a = 1 and b = −1; thus B ∈ span{A1, A2}.

28. (a) The general vector in span{f, g} is of the form h(x) = c1 cosh x + c2 sinh x, where c1, c2 ∈ R.

(b) Let h ∈ S and c1, c2 ∈ R. Then h(x) = c1f(x) + c2g(x) = c1 cosh x + c2 sinh x = c1(e^x + e^{−x})/2 + c2(e^x − e^{−x})/2 = ((c1 + c2)/2)e^x + ((c1 − c2)/2)e^{−x} = d1e^x + d2e^{−x}, where d1 = (c1 + c2)/2 and d2 = (c1 − c2)/2. Therefore S = span{e^x, e^{−x}}.

29. The origin in R^3.

30. All points lying on the line through the origin with direction v1.

31. All points lying on the plane through the origin containing v1 and v2.

32. If v1 = v2 = 0, then the subspace is the origin in R^3. If at least one of the vectors is nonzero, then the subspace consists of all points lying on the line through the origin in the direction of the nonzero vector.

33. Suppose that S is a subset of S'. We must show that every vector in span(S) also belongs to span(S'). Every vector v that lies in span(S) can be expressed as v = c1v1 + c2v2 + · · · + ckvk, where v1, v2, . . . , vk belong to S. However, since S is a subset of S', the vectors v1, v2, . . . , vk also belong to S', and therefore v belongs to span(S'). Thus, we have shown that every vector in span(S) also lies in span(S').
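Membership tests like those in Problems 22 to 24 above reduce to a consistency check: v lies in span{v1, v2} exactly when appending v as an extra column does not raise the rank (the Rouché–Capelli criterion). A minimal sketch, assuming sympy is available:

```python
from sympy import Matrix

def in_span(v1, v2, v):
    # a*v1 + b*v2 = v is consistent iff rank([v1 v2]) == rank([v1 v2 v]).
    M = Matrix.hstack(Matrix(v1), Matrix(v2))
    return M.rank() == Matrix.hstack(M, Matrix(v)).rank()

# Problem 22: (3, 3, 4) = -v1 + 2*v2.
assert in_span((1, -1, 2), (2, 1, 3), (3, 3, 4))
assert -Matrix([1, -1, 2]) + 2*Matrix([2, 1, 3]) == Matrix([3, 3, 4])

# Problem 24: (1, 1, -2) is not in span{(3, 1, 2), (-2, -1, 1)}.
assert not in_span((3, 1, 2), (-2, -1, 1), (1, 1, -2))
```

The same helper confirms Problem 23, where (5, 3, −6) = v1 + 2v2.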
34. Proof of =⇒: We begin by supposing that span{v1, v2, v3} = span{v1, v2}. Since v3 ∈ span{v1, v2, v3}, our supposition implies that v3 ∈ span{v1, v2}, which means that v3 can be expressed as a linear combination of the vectors v1 and v2.

Proof of ⇐=: Now suppose that v3 can be expressed as a linear combination of the vectors v1 and v2. We must show that span{v1, v2, v3} = span{v1, v2}, and we do this by showing that each of these sets is a subset of the other. Since it is clear that span{v1, v2} is a subset of span{v1, v2, v3}, we focus our attention on proving that every vector in span{v1, v2, v3} belongs to span{v1, v2}. To see this, suppose that v belongs to span{v1, v2, v3}, so that we may write v = c1v1 + c2v2 + c3v3. By assumption, v3 can be expressed as a linear combination of v1 and v2, so that we may write v3 = d1v1 + d2v2. Hence, we obtain v = c1v1 + c2v2 + c3v3 = c1v1 + c2v2 + c3(d1v1 + d2v2) = (c1 + c3d1)v1 + (c2 + c3d2)v2 ∈ span{v1, v2}, as required.

Solutions to Section 4.5

True-False Review:

1. FALSE. For instance, consider the vector space V = R^2. Here are two different minimal spanning sets for V: {(1, 0), (0, 1)} and {(1, 0), (1, 1)}. Many other examples abound.

2. TRUE. We have seven column vectors, and each of them belongs to R^5. Therefore, the number of vectors present exceeds the number of components in those vectors, and hence they must be linearly dependent.

3. FALSE. For instance, the 7 × 5 zero matrix, 0_{7×5}, does not have linearly independent columns.

4. TRUE. Any linear dependency within the subset is also a linear dependency within the original, larger set of vectors. Therefore, if the nonempty subset were linearly dependent, the original set would also be linearly dependent. In other words, if the original set is linearly independent, then so is the nonempty subset.

5. TRUE. This is stated in Theorem 4.5.21.

6. TRUE.
If we can write v = c1v1 + c2v2 + · · · + ckvk, then {v, v1, v2, . . . , vk} is a linearly dependent set.

7. TRUE. This is a rephrasing of the statement in True-False Review Question 5 above.

8. FALSE. None of the vectors (1, 0), (0, 1), and (1, 1) in R^2 is proportional to another, and yet they form a linearly dependent set of vectors.

9. FALSE. The illustration given in part (c) of Example 4.5.22 gives an excellent case in point here.

Problems:

1. {(1, −1), (1, 1)}. These vectors are elements of R^2. Since there are two vectors and the dimension of R^2 is two, Corollary 4.5.15 states that the vectors will be linearly dependent if and only if det [1 1; −1 1] = 0. Now det [1 1; −1 1] = 2 ≠ 0. Consequently, the given vectors are linearly independent.

2. {(2, −1), (3, 2), (0, 1)}. These vectors are elements of R^2, but since there are three vectors, the vectors are linearly dependent by Corollary 4.5.15. Let v1 = (2, −1), v2 = (3, 2), v3 = (0, 1). We now determine a dependency relationship. The condition c1v1 + c2v2 + c3v3 = 0 requires 2c1 + 3c2 = 0 and −c1 + 2c2 + c3 = 0. The reduced row echelon form of the augmented matrix of this system is [1 0 −3/7 | 0; 0 1 2/7 | 0]. Hence the system has solution c1 = 3r, c2 = −2r, c3 = 7r. Consequently, 3v1 − 2v2 + 7v3 = 0.

3. {(1, −1, 0), (0, 1, −1), (1, 1, 1)}. These vectors are elements of R^3. Since det [1 0 1; −1 1 1; 0 −1 1] = 3 ≠ 0, the vectors are linearly independent by Corollary 4.5.15.

4. {(1, 2, 3), (1, −1, 2), (1, −4, 1)}. These vectors are elements of R^3. Since det [1 1 1; 2 −1 −4; 3 2 1] = 0, the vectors are linearly dependent by Corollary 4.5.15. Let v1 = (1, 2, 3), v2 = (1, −1, 2), v3 = (1, −4, 1). We determine a dependency relation. The condition c1v1 + c2v2 + c3v3 = 0 requires c1 + c2 + c3 = 0, 2c1 − c2 − 4c3 = 0, 3c1 + 2c2 + c3 = 0. The reduced row echelon form of the augmented matrix of this system is [1 0 −1 | 0; 0 1 2 | 0; 0 0 0 | 0]. Hence c1 = r, c2 = −2r, c3 = r, and so v1 − 2v2 + v3 = 0.

5. Given {(−2, 4, −6), (3, −6, 9)}.
The vectors are linearly dependent because 3(−2, 4, −6) + 2(3, −6, 9) = (0, 0, 0), which gives a linear dependency relation. Alternatively, let a, b ∈ R and observe that a(−2, 4, −6) + b(3, −6, 9) = (0, 0, 0) =⇒ (−2a, 4a, −6a) + (3b, −6b, 9b) = (0, 0, 0). The last equality results in the system: −2a + 3b = 0, 4a − 6b = 0, −6a + 9b = 0. The reduced row echelon form of the augmented matrix of this system is [1 −3/2 | 0; 0 0 | 0; 0 0 | 0], which implies that a = (3/2)b. Thus, the given set of vectors is linearly dependent.

6. {(1, −1, 2), (2, 1, 0)}. Let a, b ∈ R. Then a(1, −1, 2) + b(2, 1, 0) = (0, 0, 0) =⇒ (a, −a, 2a) + (2b, b, 0) = (0, 0, 0) =⇒ (a + 2b, −a + b, 2a) = (0, 0, 0). The last equality results in the system: a + 2b = 0, −a + b = 0, 2a = 0. Since the only solution of the system is a = b = 0, it follows that the vectors are linearly independent.

7. {(−1, 1, 2), (0, 2, −1), (3, 1, 2), (−1, −1, 1)}. These vectors are elements of R^3. Since there are four vectors, it follows from Corollary 4.5.15 that the vectors are linearly dependent. Let v1 = (−1, 1, 2), v2 = (0, 2, −1), v3 = (3, 1, 2), v4 = (−1, −1, 1). Then c1v1 + c2v2 + c3v3 + c4v4 = 0 requires −c1 + 3c3 − c4 = 0, c1 + 2c2 + c3 − c4 = 0, 2c1 − c2 + 2c3 + c4 = 0. The reduced row echelon form of the augmented matrix of this system is [1 0 0 2/5 | 0; 0 1 0 −3/5 | 0; 0 0 1 −1/5 | 0], so that c1 = −2r, c2 = 3r, c3 = r, c4 = 5r. Hence, −2v1 + 3v2 + v3 + 5v4 = 0.

8. {(1, −1, 2, 3), (2, −1, 1, −1), (−1, 1, 1, 1)}. Let a, b, c ∈ R. Then a(1, −1, 2, 3) + b(2, −1, 1, −1) + c(−1, 1, 1, 1) = (0, 0, 0, 0) =⇒ (a, −a, 2a, 3a) + (2b, −b, b, −b) + (−c, c, c, c) = (0, 0, 0, 0) =⇒ (a + 2b − c, −a − b + c, 2a + b + c, 3a − b + c) = (0, 0, 0, 0). The last equality results in the system: a + 2b − c = 0, −a − b + c = 0, 2a + b + c = 0, 3a − b + c = 0. The reduced row echelon form of the augmented matrix of this system is [1 0 0 | 0; 0 1 0 | 0; 0 0 1 | 0; 0 0 0 | 0]. Consequently, a = b = c = 0. Thus, the given set of vectors is linearly independent.
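Dependency relations like those found in Problems 4 and 7 above can be verified by direct arithmetic. A minimal sketch, assuming sympy is available:

```python
from sympy import Matrix

# Problem 4: columns are (1, 2, 3), (1, -1, 2), (1, -4, 1).
A = Matrix([[1, 1, 1], [2, -1, -4], [3, 2, 1]])
assert A.det() == 0                                   # dependent by Corollary 4.5.15
assert A * Matrix([1, -2, 1]) == Matrix([0, 0, 0])    # v1 - 2*v2 + v3 = 0

# Problem 7: -2*v1 + 3*v2 + v3 + 5*v4 = 0.
v1, v2, v3, v4 = (Matrix(v) for v in
                  [(-1, 1, 2), (0, 2, -1), (3, 1, 2), (-1, -1, 1)])
assert -2*v1 + 3*v2 + v3 + 5*v4 == Matrix([0, 0, 0])
```

Passing the coefficient matrix to nullspace() recovers the same dependency coefficients up to scaling.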
9. {(2, −1, 0, 1), (1, 0, −1, 2), (0, 3, 1, 2), (−1, 1, 2, 1)}. These vectors are elements of R^4. Expanding along the first row of the matrix whose columns are the given vectors (cofactor expansion), we obtain det [2 1 0 −1; −1 0 3 1; 0 −1 1 2; 1 2 2 1] = 2 det [0 3 1; −1 1 2; 2 2 1] − det [−1 3 1; 0 1 2; 1 2 1] + det [−1 0 3; 0 −1 1; 1 2 2] = 2(11) − 8 + 7 = 21 ≠ 0. Thus, it follows from Corollary 4.5.15 that the vectors are linearly independent.

10. Since det [1 4 7; 2 5 8; 3 6 9] = 0, the given set of vectors is linearly dependent in R^3. Further, since the vectors are not collinear, it follows that span{v1, v2, v3} is a plane in 3-space.

11. (a) Since det [2 1 −3; −1 3 −9; 5 −4 12] = 0, the given set of vectors is linearly dependent.

(b) By inspection, we see that v3 = −3v2. Hence v2 and v3 are collinear, and therefore span{v2, v3} is the line through the origin that has the direction of v2. Further, since v1 is not proportional to either of these vectors, it does not lie along this line; hence v1 is not in span{v2, v3}.

12. det [1 0 1; 1 2 k; k k 6] = det [1 0 1; 0 2 k − 1; 0 k 6 − k] = (3 − k)(k + 4). The determinant is zero when k = 3 or k = −4. Consequently, by Corollary 4.5.15, the vectors are linearly dependent if and only if k = 3 or k = −4.

13. {(1, 0, 1, k), (−1, 0, k, 1), (2, 0, 1, 3)}. These vectors are elements of R^4. Let a, b, c ∈ R. Then a(1, 0, 1, k) + b(−1, 0, k, 1) + c(2, 0, 1, 3) = (0, 0, 0, 0) =⇒ (a − b + 2c, 0, a + kb + c, ka + b + 3c) = (0, 0, 0, 0). The last equality results in the system: a − b + 2c = 0, a + kb + c = 0, ka + b + 3c = 0. Evaluating the determinant of the coefficient matrix, we obtain det [1 −1 2; 1 k 1; k 1 3] = det [1 −1 2; 0 k + 1 −1; 0 k + 1 3 − 2k] = 2(k + 1)(2 − k). Consequently, the system has only the trivial solution, and hence the given set of vectors is linearly independent, if and only if k ≠ 2, −1.

14. {(1, 1, 0, −1), (1, k, 1, 1), (2, 1, k, 1), (−1, 1, 1, k)}. These vectors are elements of R^4. Let a, b, c, d ∈ R.
Then a(1, 1, 0, −1) + b(1, k, 1, 1) + c(2, 1, k, 1) + d(−1, 1, 1, k) = (0, 0, 0, 0) =⇒ (a + b + 2c − d, a + kb + c + d, b + kc + d, −a + b + c + kd) = (0, 0, 0, 0). The last equality results in the system: a + b + 2c − d = 0, a + kb + c + d = 0, b + kc + d = 0, −a + b + c + kd = 0. By Corollary 3.2.5, this system has a nontrivial solution if and only if the determinant of the coefficient matrix is zero. Evaluating that determinant, we obtain det [1 1 2 −1; 1 k 1 1; 0 1 k 1; −1 1 1 k] = det [1 1 2 −1; 0 k − 1 −1 2; 0 1 k 1; 0 2 3 k − 1] = det [k − 1 −1 2; 1 k 1; 2 3 k − 1] = (k − 3)(k − 1)(k + 2). For the original set of vectors to be linearly independent, we need a = b = c = d = 0, which holds provided that k ≠ 3, 1, −2.

15. Let a, b, c ∈ R. Then a[1 1; 0 1] + b[2 −1; 0 1] + c[3 6; 0 4] = [0 0; 0 0] =⇒ [a + 2b + 3c, a − b + 6c; 0, a + b + 4c] = [0 0; 0 0]. The last equation results in the system: a + 2b + 3c = 0, a − b + 6c = 0, a + b + 4c = 0.
The last equation results in the system: a + 2b + 5c 2a + b + 7c 00 2a + b + 7c = 0 b c = 0 + 1030 0 1 1 0 , which The REDUCED ROW ECHELON FORM of the augmented matrix of this system is 0 0 0 0 0000 implies that the system has an inﬁnite number of solutions. Consequently, the given matrices are linearly dependent. 18. Let a, b ∈ R. ap1 (x) + bp2 (x) = 0 =⇒ a(1 − x) + b(1 + x) = 0 =⇒ (a + b) + (−a + b)x = 0. a+b=0 Equating like coeﬃcients, we obtain the system: . Since the only solution to this system is −a + b = 0 a = b = 0, it follows that the given vectors are linearly independent. 19. Let a, b ∈ R. ap1 (x) + bp2 (x) = 0 =⇒ a(2 + 3x) + b(4 + 6x) = 0 =⇒ (2a + 4b) + (3a + 6b)x = 0. 2a + 4b = 0 Equating like coeﬃcients, we obtain the system: . The REDUCED ROW ECHELON FORM 3a + 6b = 0 120 of the augmented matrix of this system is , which implies that the system has an inﬁnite number 000 of solutions. Thus, the given vectors are linearly dependent. 20. Let c1 , c2 ∈ R. c1 p1 (x) + c2 p2 (x) = 0 =⇒ c1 (a + bx) + c2 (c + dx) = 0 =⇒ (ac1 + cc2 ) + (bc1 + dc2 )x = 0. ac1 + cc2 = 0 Equating like coeﬃcients, we obtain the system: . The determinant of the matrix of coefbc1 + dc2 = 0 ac ﬁcients is = ad − bc. Consequently, the system has just the trivial solution, and hence p1 (x) and bd p2 (x) are linearly independent if and only if ad − bc = 0. 21. Since cos 2x = cos2 x − sin2 x, f1 (x) = f3 (x) − f2 (x) so it follows that f1 , f2 , and f3 are linearly dependent in C ∞ (−∞, ∞). 273 5 22. Let v1 = (1, 2, 3), v2 = (−3, 4, 5), v3 = (1, − 4 , − 3 ). By inspection, we see that v2 = −3v3 . Further, 3 since v1 and v2 are not proportional they are linearly independent. Consequently, {v1 , v2 } is a linearly independent set of vectors and span{v1 , v2 } =span{v1 , v2 , v3 }. 23. Let v1 = (3, 1, 5), v2 = (0, 0, 0), v3 = (1, 2, −1), v4 = (−1, 2, 3). Since v2 = 0, it is certainly true that span{v1 , v3 v4 } =span{v1 , v2 , v3 v4 }. 
Further, since det[v1, v3, v4] = 42 ≠ 0, {v1, v3, v4} is a linearly independent set.

24. Since we have four vectors in R^3, the given set is linearly dependent. We could determine the specific linear dependency between the vectors to find a linearly independent subset, but in this case, if we just take any three of the vectors, say (1, −1, 1), (1, −3, 1), (3, 1, 2), then det [1 1 3; −1 −3 1; 1 1 2] = 2 ≠ 0, so these vectors are linearly independent. Consequently, span{(1, −1, 1), (1, −3, 1), (3, 1, 2)} = span{(1, 1, 1), (1, −1, 1), (1, −3, 1), (3, 1, 2)}.

25. Let v1 = (1, 1, −1, 1), v2 = (2, −1, 3, 1), v3 = (1, 1, 2, 1), v4 = (2, −1, 2, 1). Since det [1 2 1 2; 1 −1 1 −1; −1 3 2 2; 1 1 1 1] = 0, the set {v1, v2, v3, v4} is linearly dependent. We now determine the dependency relationship. The reduced row echelon form of the augmented matrix corresponding to the system c1(1, 1, −1, 1) + c2(2, −1, 3, 1) + c3(1, 1, 2, 1) + c4(2, −1, 2, 1) = (0, 0, 0, 0) is [1 0 0 1/3 | 0; 0 1 0 1 | 0; 0 0 1 −1/3 | 0; 0 0 0 0 | 0], so that c1 = −r, c2 = −3r, c3 = r, c4 = 3r, where r is a free variable. It follows that a linear dependency relationship between the given set of vectors is −v1 − 3v2 + v3 + 3v4 = 0, so that v1 = −3v2 + v3 + 3v4. Consequently, span{v2, v3, v4} = span{v1, v2, v3, v4}, and {v2, v3, v4} is a linearly independent set.

26. Let A1 = [1 2; 3 4], A2 = [−1 2; 5 7], A3 = [3 2; 1 1]. Then c1A1 + c2A2 + c3A3 = 0_2 requires c1 − c2 + 3c3 = 0, 2c1 + 2c2 + 2c3 = 0, 3c1 + 5c2 + c3 = 0, 4c1 + 7c2 + c3 = 0. The reduced row echelon form of the augmented matrix of this system is [1 0 2 | 0; 0 1 −1 | 0; 0 0 0 | 0; 0 0 0 | 0]. Consequently, the matrices are linearly dependent. Solving the system gives c1 = −2r, c2 = c3 = r. Hence, a linear dependency relationship is −2A1 + A2 + A3 = 0_2.

27. We first determine whether the given set of polynomials is linearly dependent. Let p1(x) = 2 − 5x, p2(x) = 3 + 7x, p3(x) = 4 − x.
Then c1(2 − 5x) + c2(3 + 7x) + c3(4 − x) = 0 requires 2c1 + 3c2 + 4c3 = 0 and −5c1 + 7c2 − c3 = 0. This system has solution (−31r, −18r, 29r), where r is a free variable. Consequently, the given set of polynomials is linearly dependent, and a linear dependency relationship is −31p1(x) − 18p2(x) + 29p3(x) = 0, or equivalently, p3(x) = (1/29)[31p1(x) + 18p2(x)]. Hence, the linearly independent set of vectors {2 − 5x, 3 + 7x} spans the same subspace of P1 as that spanned by {2 − 5x, 3 + 7x, 4 − x}.

28. We first determine whether the given set of polynomials is linearly dependent. Let p1(x) = 2 + x^2, p2(x) = 4 − 2x + 3x^2, p3(x) = 1 + x. Then c1(2 + x^2) + c2(4 − 2x + 3x^2) + c3(1 + x) = 0 leads to the system 2c1 + 4c2 + c3 = 0, −2c2 + c3 = 0, c1 + 3c2 = 0. This system has solution (−3r, r, 2r), where r is a free variable. Consequently, the given set of vectors is linearly dependent, and a specific linear relationship is −3p1(x) + p2(x) + 2p3(x) = 0, or equivalently, p2(x) = 3p1(x) − 2p3(x). Hence, the linearly independent set of vectors {2 + x^2, 1 + x} spans the same subspace of P2 as that spanned by the given set of vectors.

29. W[f1, f2, f3](x) = det [1, x, x^2; 0, 1, 2x; 0, 0, 2] = 2. Since W[f1, f2, f3](x) ≠ 0 on I, it follows that the functions are linearly independent on I.

30. W[f1, f2, f3](x) = det [sin x, cos x, tan x; cos x, −sin x, sec^2 x; −sin x, −cos x, 2 tan x sec^2 x] = det [sin x, cos x, tan x; cos x, −sin x, sec^2 x; 0, 0, tan x(2 sec^2 x + 1)] = −tan x(2 sec^2 x + 1). W[f1, f2, f3](x) is not identically zero on I, so the vectors are linearly independent by Theorem 4.5.21.

31. W[f1, f2, f3](x) = det [1, 3x, x^2 − 1; 0, 3, 2x; 0, 0, 2] = 6 ≠ 0 on I. Consequently, {f1, f2, f3} is a linearly independent set on I by Theorem 4.5.21.

32. W[f1, f2, f3](x) = det [e^{2x}, e^{3x}, e^{−x}; 2e^{2x}, 3e^{3x}, −e^{−x}; 4e^{2x}, 9e^{3x}, e^{−x}] = e^{4x} det [1 1 1; 2 3 −1; 4 9 1] = 12e^{4x}. Since the Wronskian is never zero, the functions are linearly independent on (−∞, ∞).

33.
On [0, ∞), f2 = 7f1, so the functions are linearly dependent on this interval; therefore W[f1, f2](x) = 0 for x ∈ [0, ∞). However, on (−∞, 0), W[f1, f2](x) = det [3x^3, 7x^2; 9x^2, 14x] = 42x^4 − 63x^4 = −21x^4 ≠ 0. Since the Wronskian is not zero for all x ∈ (−∞, ∞), the functions are linearly independent on that interval.

34. W[f1, f2, f3](x) = det [1, x, 2x − 1; 0, 1, 2; 0, 0, 0] = 0. By inspection, we see that f3 = 2f2 − f1, so the functions are linearly dependent on (−∞, ∞).

35. We show that the Wronskian (computed by cofactor expansion along the first row) is identically zero: W[f1, f2, f3](x) = det [e^x, e^{−x}, cosh x; e^x, −e^{−x}, sinh x; e^x, e^{−x}, cosh x] = −(cosh x + sinh x) − (cosh x − sinh x) + 2 cosh x = 0. Thus, the Wronskian is identically zero on (−∞, ∞). Furthermore, {f1, f2, f3} is a linearly dependent set because −(1/2)f1(x) − (1/2)f2(x) + f3(x) = −(1/2)e^x − (1/2)e^{−x} + cosh x = −(1/2)e^x − (1/2)e^{−x} + (e^x + e^{−x})/2 = 0 for all x ∈ I.

36. We show that the Wronskian is identically zero for f1(x) = ax^3 and f2(x) = bx^3, which covers the functions in this problem as a special case: det [ax^3, bx^3; 3ax^2, 3bx^2] = 3abx^5 − 3abx^5 = 0. Next, let a, b ∈ R. If x ≥ 0, then af1(x) + bf2(x) = 0 =⇒ 2ax^3 + 5bx^3 = 0 =⇒ (2a + 5b)x^3 = 0 =⇒ 2a + 5b = 0. If x ≤ 0, then af1(x) + bf2(x) = 0 =⇒ 2ax^3 − 3bx^3 = 0 =⇒ (2a − 3b)x^3 = 0 =⇒ 2a − 3b = 0. Solving the resulting system, we obtain a = b = 0; therefore, {f1, f2} is a linearly independent set of vectors on (−∞, ∞).

37. (a) When x > 0, f2'(x) = 1, and when x < 0, f2'(x) = −1; thus f2'(0) does not exist, which implies that f2 ∉ C^1(−∞, ∞).

(b) Let a, b ∈ R. On the interval (−∞, 0), ax + b(−x) = 0, which has more than the trivial solution for a and b. Thus, {f1, f2} is a linearly dependent set of vectors on (−∞, 0). On the interval [0, ∞), ax + bx = 0 =⇒ a + b = 0, which has more than the trivial solution for a and b. Therefore {f1, f2} is a linearly dependent set of vectors on [0, ∞).
On the interval (−∞, ∞), a and b must satisfy both ax + b(−x) = 0 and ax + bx = 0, that is, a − b = 0 and a + b = 0. Since this system has only a = b = 0 as its solution, {f1, f2} is a linearly independent set of vectors on (−∞, ∞).

Figure 64: Figure for Problem 37 (graphs of y = f1(x) = x and y = f2(x): f1(x) = −f2(x) on (−∞, 0) and f1(x) = f2(x) on (0, ∞)).

38. Let a, b ∈ R. af1(x) + bf2(x) = 0 =⇒ (a + b)x = 0 if x ≠ 0, and a(0) + b(1) = 0 if x = 0 =⇒ a + b = 0 and b = 0 =⇒ a = 0 and b = 0, so {f1, f2} is a linearly independent set on I.

39. Let a, b, c ∈ R and x ∈ (−∞, ∞). af1(x) + bf2(x) + cf3(x) = 0 =⇒ a(x − 1) + b(2x) + c(3) = 0 for x ≥ 1 and 2a(x − 1) + b(2x) + c(3) = 0 for x < 1 =⇒ (a + 2b)x + (−a + 3c) = 0 and (2a + 2b)x + (−2a + 3c) = 0 =⇒ a + 2b = 0, −a + 3c = 0, 2a + 2b = 0, and −2a + 3c = 0. Since the only solution to this system of equations is a = b = c = 0, it follows that the given functions are linearly independent on (−∞, ∞). The domain space may be divided into three types of intervals: (1) interval subsets of (−∞, 1); (2) interval subsets of [1, ∞); (3) intervals containing 1 where 1 is not an endpoint of the interval.

For intervals of type (3): Intervals of type (3) are treated as above [with domain space (−∞, ∞)]: the vectors are linearly independent.

For intervals of type (1): a(2(x − 1)) + b(2x) + c(3) = 0 =⇒ (2a + 2b)x + (−2a + 3c) = 0 =⇒ 2a + 2b = 0 and −2a + 3c = 0. Since this system has three variables with only two equations, the solution to the system is not unique; hence intervals of type (1) result in linearly dependent vectors.

For intervals of type (2): a(x − 1) + b(2x) + c(3) = 0 =⇒ a + 2b = 0 and −a + 3c = 0. As in the last case, this system has three variables with only two equations, so intervals of type (2) also result in linearly dependent vectors.

40. (a) Let f0(x) = 1, f1(x) = x, f2(x) = x², f3(x) = x³. Then W[f0, f1, f2, f3](x) = det[1, x, x², x³; 0, 1, 2x, 3x²; 0, 0, 2, 6x; 0, 0, 0, 6] = 12 ≠ 0.
Hence, {f0, f1, f2, f3} is linearly independent on any interval.

(b) W[f0, f1, f2, . . . , fn](x) = det[1, x, x², . . . , xⁿ; 0, 1, 2x, . . . , nx^(n−1); 0, 0, 2, . . . , n(n − 1)x^(n−2); . . . ; 0, 0, 0, . . . , n!]. The matrix corresponding to this determinant is upper triangular, so the value of the determinant is given by the product of the diagonal entries: W[f0, f1, f2, . . . , fn](x) = 1 · 1 · 2 · 6 · 24 · · · n! = 0! 1! 2! · · · n!, which is nonzero regardless of the actual domain of x. Consequently, the functions are linearly independent on any interval.

41. (a) Let f1(x) = e^(r1x), f2(x) = e^(r2x), and f3(x) = e^(r3x). Then W[f1, f2, f3](x) = det[e^(r1x), e^(r2x), e^(r3x); r1e^(r1x), r2e^(r2x), r3e^(r3x); r1²e^(r1x), r2²e^(r2x), r3²e^(r3x)] = e^(r1x)e^(r2x)e^(r3x) det[1, 1, 1; r1, r2, r3; r1², r2², r3²] = e^((r1+r2+r3)x)(r3 − r1)(r3 − r2)(r2 − r1). If ri ≠ rj for i ≠ j, then W[f1, f2, f3](x) is never zero, and hence the functions are linearly independent on any interval. If, on the other hand, ri = rj with i ≠ j, then fi − fj = 0, so that the functions are linearly dependent. Thus, r1, r2, r3 must all be different in order that f1, f2, f3 be linearly independent.

(b) W[f1, f2, . . . , fn](x) = det[e^(r1x), e^(r2x), . . . , e^(rnx); r1e^(r1x), r2e^(r2x), . . . , rne^(rnx); . . . ; r1^(n−1)e^(r1x), r2^(n−1)e^(r2x), . . . , rn^(n−1)e^(rnx)] = e^(r1x)e^(r2x) · · · e^(rnx) det[1, 1, . . . , 1; r1, r2, . . . , rn; . . . ; r1^(n−1), r2^(n−1), . . . , rn^(n−1)] = e^(r1x)e^(r2x) · · · e^(rnx) V(r1, r2, . . . , rn)* = e^(r1x)e^(r2x) · · · e^(rnx) ∏ (rm − ri), where the product is taken over 1 ≤ i < m ≤ n. From the last equality, we see that if ri ≠ rj for i ≠ j, where i, j ∈ {1, 2, 3, . . . , n}, then W[f1, f2, . . . , fn](x) is never zero, and hence the functions are linearly independent on any interval. If, on the other hand, ri = rj with i ≠ j, then fi − fj = 0, so that the functions are linearly dependent. Thus {f1, f2, . . .
, fn} is a linearly independent set if and only if all of the ri, i ∈ {1, 2, 3, . . . , n}, are distinct.

(*Note: V(r1, r2, . . . , rn) is the n × n Vandermonde determinant. See Section 3.3, Problem 21.)

42. Let a, b ∈ R. Assume that av + bw = 0. Then a(αv1 + v2) + b(v1 + αv2) = 0, which implies that (αa + b)v1 + (a + bα)v2 = 0. Now since it is given that v1 and v2 are linearly independent, αa + b = 0 and a + bα = 0. This system has only the trivial solution for a and b if and only if det[α, 1; 1, α] ≠ 0, that is, if and only if α² − 1 ≠ 0, or α ≠ ±1. Therefore, the vectors are linearly independent provided that α ≠ ±1.

43. It is given that v1 and v2 are linearly independent. Let u1 = a1v1 + b1v2, u2 = a2v1 + b2v2, and u3 = a3v1 + b3v2, where a1, a2, a3, b1, b2, b3 ∈ R. Let c1, c2, c3 ∈ R. Then c1u1 + c2u2 + c3u3 = 0 =⇒ c1(a1v1 + b1v2) + c2(a2v1 + b2v2) + c3(a3v1 + b3v2) = 0 =⇒ (c1a1 + c2a2 + c3a3)v1 + (c1b1 + c2b2 + c3b3)v2 = 0. Now since v1 and v2 are linearly independent, c1a1 + c2a2 + c3a3 = 0 and c1b1 + c2b2 + c3b3 = 0. There are infinitely many solutions to this homogeneous system, since there are three unknowns but only two equations. Hence, {u1, u2, u3} is a linearly dependent set of vectors.

44. Given that {v1, v2, . . . , vm} is a linearly independent set of vectors in a vector space V, and uk = Σ_{i=1}^m aik vi, k ∈ {1, 2, . . . , n}. (44.1)

(a) Let ck ∈ R, k ∈ {1, 2, . . . , n}. Using system (44.1) and Σ_{k=1}^n ck uk = 0, we obtain: Σ_{k=1}^n ck (Σ_{i=1}^m aik vi) = 0 ⇐⇒ Σ_{i=1}^m (Σ_{k=1}^n aik ck) vi = 0. Since the vi, i ∈ {1, 2, . . . , m}, are linearly independent, Σ_{k=1}^n aik ck = 0, 1 ≤ i ≤ m. (44.2)

But this is a system of m equations in the n unknowns c1, c2, . . . , cn. Since n > m, the system has more unknowns than equations, and so it has nontrivial solutions. Thus, {u1, u2, . . . , un} is a linearly dependent set.
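The Vandermonde factorization V(r1, . . . , rn) = ∏(rm − ri) invoked in Problem 41 can be spot-checked numerically. The sketch below is not part of the original solutions; numpy is assumed, and the sample nodes r = 1, 2, 4 are an arbitrary choice:

```python
import numpy as np
from itertools import combinations

# Sample nodes; any distinct values give a nonzero determinant.
r = np.array([1.0, 2.0, 4.0])

# Vandermonde matrix with rows (1, r_i, r_i^2, ...), as in Problem 41(b)
# (up to transposition, which does not change the determinant).
V = np.vander(r, increasing=True)

# Product of (r_m - r_i) over all pairs i < m; here (2-1)(4-1)(4-2) = 6.
expected = np.prod([r[m] - r[i] for i, m in combinations(range(len(r)), 2)])

assert np.isclose(np.linalg.det(V), expected)
```

With repeated nodes the product, and hence the determinant, collapses to zero, matching the dependence conclusion in the text.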
(b) If m = n, then the system (44.2) has only the trivial solution ⇐⇒ the coefficient matrix of the system is invertible ⇐⇒ det[aij] ≠ 0.

(c) If n < m, then the homogeneous system (44.2) has just the trivial solution if and only if rank(A) = n. Recall that for a homogeneous system, rank(A#) = rank(A).

(d) Corollary 4.5.15.

45. We assume that c1(Av1) + c2(Av2) + · · · + cn(Avn) = 0. Our aim is to show that c1 = c2 = · · · = cn = 0. We manipulate the left side of the above equation as follows: c1(Av1) + c2(Av2) + · · · + cn(Avn) = A(c1v1) + A(c2v2) + · · · + A(cnvn) = A(c1v1 + c2v2 + · · · + cnvn) = 0. Since A is invertible, we can left-multiply the last equation by A⁻¹ to obtain c1v1 + c2v2 + · · · + cnvn = 0. Since {v1, v2, . . . , vn} is linearly independent, we can now conclude directly that c1 = c2 = · · · = cn = 0, as required.

46. Assume that c1v1 + c2v2 + c3v3 = 0. We must show that c1 = c2 = c3 = 0. Let us suppose for the moment that c3 ≠ 0. In that case, we can solve the above equation for v3: v3 = −(c1/c3)v1 − (c2/c3)v2. However, this contradicts the assumption that v3 does not belong to span{v1, v2}. Therefore, we conclude that c3 = 0. Our starting equation therefore reduces to c1v1 + c2v2 = 0. Now the assumption that {v1, v2} is linearly independent shows that c1 = c2 = 0. Therefore, c1 = c2 = c3 = 0, as required.

47. Assume that c1v1 + c2v2 + · · · + ckvk + ck+1vk+1 = 0. We must show that c1 = c2 = · · · = ck+1 = 0. Let us suppose for the moment that ck+1 ≠ 0. In that case, we can solve the above equation for vk+1: vk+1 = −(c1/ck+1)v1 − (c2/ck+1)v2 − · · · − (ck/ck+1)vk. However, this contradicts the assumption that vk+1 does not belong to span{v1, v2, . . . , vk}. Therefore, we conclude that ck+1 = 0. Our starting equation therefore reduces to c1v1 + c2v2 + · · · + ckvk = 0. Now the assumption that {v1, v2, . . . , vk} is linearly independent shows that c1 = c2 = · · · = ck = 0.
Therefore, c1 = c2 = · · · = ck = ck+1 = 0, as required. 48. Let {v1 , v2 , . . . , vk } be a set of vectors with k ≥ 2. Suppose that vk can be expressed as a linear combination of {v1 , v2 , . . . , vk−1 }. We claim that span{v1 , v2 , . . . , vk } = span{v1 , v2 , . . . , vk−1 }. Since every vector belonging to span{v1 , v2 , . . . , vk−1 } evidently belongs to span{v1 , v2 , . . . , vk }, we focus on showing that every vector in span{v1 , v2 , . . . , vk } also belongs to span{v1 , v2 , . . . , vk−1 }: Let v ∈ span{v1 , v2 , . . . , vk }. We therefore may write v = c1 v1 + c2 v2 + · · · + ck vk . By assumption, we may write vk = d1 v1 + d2 v2 + · · · + dk−1 vk−1 . Therefore, we obtain v = c1 v1 + c2 v2 + · · · + ck vk = c1 v1 + c2 v2 + · · · + ck−1 vk−1 + ck (d1 v1 + d2 v2 + · · · + dk−1 vk−1 ) = (c1 + ck d1 )v1 + (c2 + ck d2 )v2 + · · · + (ck−1 + ck dk−1 )vk−1 ∈ span{v1 , v2 , . . . , vk−1 }. This shows that every vector belonging to span{v1 , v2 , . . . , vk } also belongs to span{v1 , v2 , . . . , vk−1 }, as needed. 280 49. We ﬁrst prove part 1 of Proposition 4.5.7. Suppose that we have a set {u, v} of two vectors in a vector space V . If {u, v} is linearly dependent, then we have cu + dv = 0, where c and d are not both zero. Without loss of generality, suppose that c = 0. Then we have d u = − v, c so that u and v are proportional. Conversely, if u and v are proportional, then v = cu for some constant c. Thus, cu − v = 0, which shows that {u, v} is linearly dependent. For part 2 of Proposition 4.5.7, suppose the zero vector 0 belongs to a set S of vectors in a vector space V . Then 1 · 0 is a linear dependency among the vectors in S , and therefore S is linearly dependent. 50. Suppose that {v1 , v2 , . . . , vk } spans V and let v be any vector in V . By assumption, we can write v = c1 v1 + c2 v2 + · · · + ck vk , for some constants c1 , c2 , . . . , ck . 
Therefore, c1 v1 + c2 v2 + · · · + ck vk − v = 0 is a linear dependency among the vectors in {v, v1 , v2 , . . . , vk }. Thus, {v, v1 , v2 , . . . , vk } is linearly dependent. 51. Let S = {p1 , p2 , . . . , pk } and assume without loss of generality that the polynomials are listed in decreasing order by degree: deg(p1 ) > deg(p2 ) > · · · > deg(pk ). To show that S is linearly independent, assume that c1 p1 + c2 p2 + · · · + ck pk = 0. We wish to show that c1 = c2 = · · · = ck = 0. We require that each coeﬃcient on the left side of the above equation is zero, since we have 0 on the right-hand side. Since p1 has the highest degree, none of the terms c2 p2 ,c3 p3 , . . . , ck pk can cancel the leading coeﬃcient of p1 . Therefore, we conclude that c1 = 0. Thus, we now have c2 p2 + c3 p3 + · · · + ck pk = 0, and we can repeat this argument again now to show successively that c2 = c3 = · · · = ck = 0. Solutions to Section 4.6 1. FALSE. It is not enough that S spans V . It must also be the case that S is linearly independent. 2. FALSE. For example, R2 is not a subspace of R3 , since R2 is not even a subset of R3 . 3. TRUE. Any set of two non-proportional vectors in R2 will form a basis for R2 . 4. FALSE. We have dim[Pn ] = n + 1 and dim[Rn ] = n. 5. FALSE. For example, if V = R2 , then the set S = {(1, 0), (2, 0), (3, 0)}, consisting of 3 > 2 vectors, fails to span V , a 2-dimensional vector space. 281 6. TRUE. We have dim[P3 ] = 4, and so any set of more than four vectors in P3 must be linearly dependent (a maximal linearly independent set in a 4-dimensional vector space consists of four vectors). 7. FALSE. For instance, the two vectors 1 + x and 2 + 2x in P3 are linearly dependent. 8. TRUE. Since M3 (R) is 9-dimensional, any set of 10 vectors in this vector space must be linearly dependent by Theorem 4.6.4. 9. FALSE. Only linearly independent sets with fewer than n vectors can be extended to a basis for V . 10. TRUE. 
We can build such a subset by choosing vectors from the set as follows. Choose v1 to be any vector in the set. Now choose v2 in the set such that v2 ∉ span{v1}. Next, choose v3 in the set such that v3 ∉ span{v1, v2}. Proceed in this manner until it is no longer possible to find a vector in the set that is not spanned by the collection of previously chosen vectors. This point will occur eventually, since V is finite-dimensional. Moreover, the chosen vectors form a linearly independent set, since each vi is chosen from outside span{v1, v2, . . . , vi−1}. Thus, the set we obtain in this way is a linearly independent set of vectors that spans V, hence forms a basis for V.

11. FALSE. The set of all 3 × 3 upper triangular matrices forms a 6-dimensional subspace of M3(R), not a 3-dimensional subspace. One basis is given by {E11, E12, E13, E22, E23, E33}.

Problems:

1. dim[R²] = 2. There are two vectors, so if they are to form a basis for R², they need to be linearly independent: det[1, −1; 1, 1] = 2 ≠ 0. This implies that the vectors are linearly independent, hence they form a basis for R².

2. dim[R³] = 3. There are three vectors, so if they are to form a basis for R³, they need to be linearly independent: det[1, 3, 1; 2, −1, 1; 1, 2, −1] = 13 ≠ 0. This implies that the vectors are linearly independent, hence they form a basis for R³.

3. dim[R³] = 3. There are three vectors, so if they are to form a basis for R³, they need to be linearly independent; however, det[1, 2, 3; −1, 5, 11; 1, −2, −5] = 0. This implies that the vectors are linearly dependent, hence they do not form a basis for R³.

4. dim[R⁴] = 4. We need 4 linearly independent vectors in order to span R⁴. However, there are only 3 vectors in this set. Thus, the vectors cannot be a basis for R⁴.

5. dim[R⁴] = 4. There are four vectors, so if they are to form a basis for R⁴, they need to be linearly independent. The determinant of the matrix whose rows are the given vectors reduces to −11. Since this determinant is nonzero, the given vectors are linearly independent. Consequently, they form a basis for R⁴.

6. dim[R⁴] = 4. There are four vectors, so if they are to form a basis for R⁴, they need to be linearly independent. Expanding the determinant of the matrix whose rows are the given vectors gives −(−1 + 2k) − (−k²) = k² − 2k + 1 = (k − 1)², which equals zero when k = 1. Thus, the vectors will form a basis for R⁴ provided k ≠ 1.

7. The general vector p(x) ∈ P3 can be represented as p(x) = a0 + a1x + a2x² + a3x³. Thus P3 = span{1, x, x², x³}. Further, {1, x, x², x³} is a linearly independent set, since W[1, x, x², x³] = det[1, x, x², x³; 0, 1, 2x, 3x²; 0, 0, 2, 6x; 0, 0, 0, 6] = 12 ≠ 0. Consequently, S = {1, x, x², x³} is a basis for P3 and dim[P3] = 4. Of course, S is not the only basis for P3.

8. Many acceptable bases are possible here. One example is S = {x³, x³ + 1, x³ + x, x³ + x²}. All of the polynomials in this set have degree 3. We verify that S is a basis: Assume that c1(x³) + c2(x³ + 1) + c3(x³ + x) + c4(x³ + x²) = 0. Thus, (c1 + c2 + c3 + c4)x³ + c4x² + c3x + c2 = 0, from which we quickly see that c1 = c2 = c3 = c4 = 0. Thus, S is linearly independent. Since P3 is 4-dimensional, we can now conclude from Corollary 4.6.13 that S is a basis for P3.

9. Ax = 0 =⇒ [1, 3; −2, −6][x1; x2] = [0; 0]. The augmented matrix for this system is [1, 3, 0; −2, −6, 0] ∼ [1, 3, 0; 0, 0, 0]; thus, x1 + 3x2 = 0, or x1 = −3x2. Let x2 = r, so that x1 = −3r, where r ∈ R. Consequently, S = {x ∈ R² : x = r(−3, 1), r ∈ R} = span{(−3, 1)}. It follows that {(−3, 1)} is a basis for S and dim[S] = 1.

10. Ax = 0 =⇒ [0, 0, 0; 0, 0, 0; 0, 1, 0][x1; x2; x3] = [0; 0; 0]. The REDUCED ROW ECHELON FORM of the augmented matrix for this system is [0, 1, 0, 0; 0, 0, 0, 0; 0, 0, 0, 0]. We see that x2 = 0, and x1 and x3 are free variables: x1 = r and x3 = s. Hence, (x1, x2, x3) = (r, 0, s) = r(1, 0, 0) + s(0, 0, 1), so that the solution set of the system is S = {x ∈ R3 : x = r(1, 0, 0) + s(0, 0, 1), r, s ∈ R}.
Therefore we see that {(1, 0, 0), (0, 0, 1)} is a basis for S and dim[S ] = 2. 1 −1 4 x1 0 3 −2 x2 = 0 . The REDUCED ROW ECHELON FORM of the 11. Ax = 0 =⇒ 2 1 2 −2 x 0 3 10 20 augmented matrix for this system is 0 1 −2 0 . If we let x3 = r then (x1 , x2 , x3 ) = (−2r, 2r, r) = 00 00 283 r(−2, 2, 1), so that the solution set of the system is S = {x ∈ R3 : x = r(−2, 2, 1), r ∈ R}. Therefore we see that {(−2, 2, 1)} is a basis for S and dim[S ] = 1. x1 0 1 −1 2 3 0 2 −1 3 4 x2 = . The REDUCED ROW ECHELON FORM of the 12. Ax = 0 =⇒ 0 1 0 1 1 x3 x4 0 3 −1 4 5 10 1 10 0 1 −1 −2 0 . If we let x3 = r, x4 = s then x2 = r + 2s and augmented matrix for this system is 0 0 0 0 0 00 0 00 x1 = −r − s. Hence, the solution set of the system is S = {x ∈ R4 : x = r(−1, 1, 1, 0) + s(−1, 2, 0, 1), r, s ∈ R} = span{(−1, 1, 1, 0), (−1, 2, 0, 1)}. Further, the vectors v1 = (−1, 1, 1, 0), v2 = (−1, 2, 0, 1) are linearly independent since c1 v1 + c2 v2 = 0 =⇒ c1 (−1, 1, 1, 0) + c2 (−1, 2, 0, 1) = (0, 0, 0, 0) =⇒ c1 = c2 = 0. Consequently, {(−1, 1, 1, 0), (−1, 2, 0, 1)} is a basis for S and dim[S ] = 2. 13. If we let y = r and z = s where r, s ∈ R, then x = 3r − s. It follows that any ordered triple in S can be written in the form: (x, y, z ) = (3r − s, r, s) = (3r, r, 0) + (−s, 0, s) = r(3, 1, 0) + s(−1, 0, 1), where r, s ∈ R. If we let v1 = (3, 1, 0) and v2 = (−1, 0, 1), then S = {v ∈ R3 : v = r(3, 1, 0) + s(−1, 0, 1), r, s ∈ R} = span{v1 , v2 }; moreover, v1 and v2 are linearly independent for if a, b ∈ R and av1 + bv2 = 0, then a(3, 1, 0) + b(−1, 0, 1) = (0, 0, 0), which implies that (3a, a, 0) + (−b, 0, b) = (0, 0, 0), or (3a − b, a, b) = (0, 0, 0). In other words, a = b = 0. Since {v1 , v2 } spans S and is linearly independent, it is a basis for S . Also, dim[S ] = 2. 14. S = {x ∈ R3 : x = (r, r − 2s, 3s − 5r), r, s ∈ R} = {x ∈ R3 : x = (r, r, −5r) + (0, −2s, 3s), r, s ∈ R} = {x ∈ R3 : x = r(1, 1, −5) + s(0, −2, 3), r, s ∈ R}. Thus, S = span{(1, 1, −5), (0, −2, 3)}. 
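Null-space computations like the one in Problem 12 can be automated. The sketch below is not part of the original solutions and assumes sympy is available:

```python
from sympy import Matrix

# Coefficient matrix of Problem 12.
A = Matrix([[1, -1, 2, 3],
            [2, -1, 3, 4],
            [1,  0, 1, 1],
            [3, -1, 4, 5]])

# sympy returns a basis for the null space (the solution space S) directly.
basis = A.nullspace()
assert len(basis) == 2   # dim[S] = 2, as found by hand

# The text's basis vectors (-1, 1, 1, 0) and (-1, 2, 0, 1) are indeed solutions.
for v in (Matrix([-1, 1, 1, 0]), Matrix([-1, 2, 0, 1])):
    assert A * v == Matrix([0, 0, 0, 0])
```

sympy may list a different (but equivalent) basis than the hand computation; the assertions therefore only check the dimension and that the text's vectors solve Ax = 0.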
The vectors v1 = (1, 1, −5) and v2 = (0, −2, 3) are linearly independent for if a, b ∈ R and av1 + bv2 = 0, then a(1, 1, −5)+b(0, −2, 3) = (0, 0, 0) =⇒ (a, a, −5a)+(0, −2b, 3b) = (0, 0, 0) =⇒ (a, a−2b, 3b−5a) = (0, 0, 0) =⇒ a = b = 0. It follows that {v1 , v2 } is a basis for S and dim[S ] = 2. ab 0c 15. S = {A ∈ M2 (R) : A = , a, b, c ∈ R}. Each vector in S can be written as 1 0 A=a so that S = span 1 0 0 0 , 0 0 1 0 pendent, it follows that a basis for S is 16. S = {A ∈ M2 (R) : A = a b c −a A=a , 0 0 0 0 +b 00 01 10 00 1 0 0 0 +c 0 1 , . Since the vectors in this set are clearly linearly inde, 0 0 1 0 , 0 0 0 1 , and therefore dim[S ] = 3. , a, b, c ∈ R}. Each vector in S can be written as 1 0 0 −1 +b 0 0 1 0 +c 0 1 0 0 , 284 so that S = span 1 0 0 −1 it follows that a basis for S is , 0 0 1 0 , 0 1 1 0 0 −1 , 0 . Since this set of vectors is clearly linearly independent, 0 01 00 , , and therefore dim[S ] = 3. 00 10 17. We see directly that v3 = 2v1 . Let v be an arbitrary vector in S . Then v = c1 v1 + c2 v2 + c3 v3 = (c1 + 2c3 )v1 + c2 v2 = d1 v1 + d2 v2 , where d1 = (c1 + 2c3 ) and d2 = c2 . Hence S = span{v1 , v2 }. Further, v1 and v2 are linearly independent for if a, b ∈ R and av1 + bv2 = 0, then a(1, 0, 1) + b(0, 1, 1) = (0, 0, 0) =⇒ (a, 0, a) + (0, b, b) = (0, 0, 0) =⇒ (a, b, a + b) = (0, 0, 0) =⇒ a = b = 0. Consequently, {v1 , v2 } is a basis for S and dim[S ] = 2. 18. f3 depends on f1 and f2 since sinh x = f1 (x) − f2 (x) ex − e−x . Thus, f3 (x) = . 2 2 ex e−x = −2 = 0 for all x ∈ R, so {f1 , f2 } is a linearly independent set. Thus, {f1 , f2 } x e −e−x is a basis for S and dim[S ] = 2. W [f1 , f2 ](x) = 19. The given set of matrices is linearly dependent because it contains the zero vector. Consequently, the 13 −1 4 5 −6 matrices A1 = , A2 = , A3 = span the same subspace of M2 (R) as that −1 2 11 −5 1 spanned by the original set. We now determine whether {A1 , A2 , A3 } is linearly independent. 
The vector equation c1A1 + c2A2 + c3A3 = 0₂ leads to the linear system c1 − c2 + 5c3 = 0, 3c1 + 4c2 − 6c3 = 0, −c1 + c2 − 5c3 = 0, 2c1 + c2 + c3 = 0. This system has solution (−2r, 3r, r), where r is a free variable. Consequently, {A1, A2, A3} is linearly dependent, with linear dependency relation −2A1 + 3A2 + A3 = 0, or equivalently, A3 = 2A1 − 3A2. It follows that the set of matrices {A1, A2} spans the same subspace of M2(R) as that spanned by the original set of matrices. Further, {A1, A2} is linearly independent by inspection, and therefore it is a basis for the subspace.

20. (a) We must show that every vector (x, y) ∈ R² can be expressed as a linear combination of v1 and v2. Mathematically, we express this as c1(1, 1) + c2(−1, 1) = (x, y), which implies that c1 − c2 = x and c1 + c2 = y. Adding the equations, we obtain 2c1 = x + y, or c1 = (1/2)(x + y). Now we can solve for c2: c2 = y − c1 = (1/2)(y − x). Therefore, since we were able to solve for c1 and c2 in terms of x and y, we see that the system of equations is consistent for all x and y. Therefore, {v1, v2} spans R².

(b) det[1, −1; 1, 1] = 2 ≠ 0, so the vectors are linearly independent.

(c) We can draw this conclusion from part (a) alone by using Theorem 4.6.12, or from part (b) alone by using Theorem 4.6.10.

21. (a) We must show that every vector (x, y) ∈ R² can be expressed as a linear combination of v1 and v2. Mathematically, we express this as c1(2, 1) + c2(3, 1) = (x, y), which implies that 2c1 + 3c2 = x and c1 + c2 = y. From this, we can solve for c1 and c2 in terms of x and y: c1 = 3y − x and c2 = x − 2y. Therefore, since we were able to solve for c1 and c2 in terms of x and y, we see that the system of equations is consistent for all x and y. Therefore, {v1, v2} spans R².

(b) det[2, 3; 1, 1] = −1 ≠ 0, so the vectors are linearly independent.

(c) We can draw this conclusion from part (a) alone by using Theorem 4.6.12, or from part (b) alone by using Theorem 4.6.10.

22.
dim[R³] = 3. There are 3 vectors, so if they are linearly independent, then they are a basis of R³ by Theorem 4.6.1. Since det[0, 6, 3; 3, 0, 3; 6, −3, 0] = 81 ≠ 0, the given vectors are linearly independent, and so {v1, v2, v3} is a basis for R³.

23. dim[P2] = 3. There are 3 vectors, so {p1, p2, p3} may be a basis for P2, depending on α. To be a basis, the set of vectors must be linearly independent. Let a, b, c ∈ R. Then ap1(x) + bp2(x) + cp3(x) = 0 =⇒ a(1 + αx²) + b(1 + x + x²) + c(2 + x) = 0 =⇒ (a + b + 2c) + (b + c)x + (aα + b)x² = 0. Equating like coefficients in the last equality, we obtain the system a + b + 2c = 0, b + c = 0, aα + b = 0. Reducing the augmented matrix of this system gives [1, 1, 2, 0; 0, 1, 1, 0; α, 1, 0, 0] ∼ [1, 1, 2, 0; 0, 1, 1, 0; 0, 1 − α, −2α, 0] ∼ [1, 1, 2, 0; 0, 1, 1, 0; 0, 0, −(α + 1), 0]. For this system to have only the trivial solution, the last row of the matrix must not be a zero row. This means that α ≠ −1. Therefore, for the given set of vectors to be linearly independent (and thus a basis), α can be any value except −1.

24. dim[P2] = 3. There are 3 vectors, so in order to form a basis for P2, they must be linearly independent. Since we are dealing with functions, we will use the Wronskian: W[p1, p2, p3](x) = det[1 + x, x(x − 1), 1 + 2x²; 1, 2x − 1, 4x; 0, 2, 4] = −2. Since the Wronskian is nonzero, {p1, p2, p3} is linearly independent, and hence forms a basis for P2.

25. W[p0, p1, p2](x) = det[1, x, (3x² − 1)/2; 0, 1, 3x; 0, 0, 3] = 3 ≠ 0, so {p0, p1, p2} is a linearly independent set. Since dim[P2] = 3, it follows that {p0, p1, p2} is a basis for P2.

26. (a) Suppose that a, b, c, d ∈ R and that a[−1, 1; 0, 1] + b[1, 3; −1, 0] + c[1, 0; 1, 2] + d[0, −1; 2, 3] = 0₂. Matching entries on each side, this leads to the system −a + b + c = 0, a + 3b − d = 0, −b + c + 2d = 0, a + 2c + 3d = 0. A row-echelon form of the augmented matrix of this system is [1, −1, −1, 0, 0; 0, 4, 1, −1, 0; 0, 0, 5, 7, 0; 0, 0, 0, 10, 0],
Hence, the given set of vectors is linearly independent. Since dim[M2 (R)] = 4, it follows that {A1 , A2 , A3 , A4 } is a basis for M2 (R). (b) We wish to express the vector 5 7 6 8 =a 5 7 6 8 −1 0 1 1 in the form +b 1 −1 3 0 +c 1 1 0 2 +d 0 −1 2 3 . Matching entries on each side of this equation (upper left, upper right, lower left, and lower right), we obtain a linear system with augmented matrix −1 11 05 1 3 0 −1 6 . 0 −1 1 2 7 1 02 38 Solving this system by Gaussian elimination, we ﬁnd that a = − 34 , b = 12, c = − 55 , and d = 3 3 have 34 −1 1 55 1 0 56 0 −1 56 13 =− + 12 − + . 78 01 −1 0 12 2 3 3 3 3 27. 56 3. Thus, we 287 x 1 1 −1 11 x 5 −6 2 (a) Ax = 0 =⇒ 2 −3 x3 5 0 2 −3 x4 1 1 −1 10 1 1 −1 1 2 −3 5 −6 0 ∼ 0 −5 7 5 0 2 −3 0 0 −5 7 0 = 0 . The augmented matrix for this linear system is 0 10 1 1 −1 10 11 −1 2 3 −8 0 ∼ 0 −5 7 −8 0 ∼ 0 1 −1.4 −8 0 0 0 0 00 00 0 1. A12 (−2), A13 (−5) 2. A23 (−1) 10 1.6 0 . 00 3. M2 (− 1 ) 5 We see that x3 = r and x4 = s are free variables, and therefore nullspace(A) is 2-dimensional. Now we can check directly that Av1 = 0 and Av2 = 0, and since v1 and v2 are linearly independent (they are non-proportional), they therefore form a basis for nullspace(A) by Corollary 4.6.13. (b) An arbitrary vector (x, y, z, w) in nullspace(A) can be expressed as a linear combination of the basis vectors: (x, y, z, w) = c1 (−2, 7, 5, 0) + c2 (3, −8, 0, 5), where c1 , c2 ∈ R. 28. (a) An arbitrary matrix in S takes the form 1 = a 0 −1 0 1 0 −1 0 0 0 + b 0 0 −1 0 1 a b −a − b c d −c − d −a − c −b − d a + b + c + d 0 0 0 00 0 −1 1 −1 . 0 + c 1 0 −1 + d 0 0 −1 1 −1 0 1 1 Therefore, we have the following basis for S : 1 0 −1 0 1 −1 0 00 0 , 0 0 0 , 1 −1 0 1 0 −1 1 −1 0 0 0 0 0 0 −1 , 0 1 −1 . 0 1 0 −1 1 From this, we see that dim[S ] = 4. 
(b) We see that each of the matrices 100 01 0 0 0 , 0 0 000 00 0 0 0 , 0 0 0 0 0 0 1 0 0 , 1 0 0 0 0 0 0 0 0 , 0 0 1 0 0 0 0 0 0 has a diﬀerent row or column that does not sum to zero, and thus none of these matrices belong to S , and they are linearly independent from one another. Therefore, supplementing the basis for S in part (a) with the ﬁve matrices here extends the basis in (a) to a basis for M3 (R). 29. (a) An arbitrary matrix in S takes 1 = a 0 0 0 0 1 0 0 1 + b 0 0 1 1 0 0 a b c d e a+b+c−d−e the form b+c−d a+c−e d+e−c 0 00 1 00 0 0 0 0 1 + c 0 0 1 + d 1 0 −1 + e 0 1 −1 . 0 1 1 −1 −1 0 1 0 −1 1 288 Therefore, we have the following 0 100 0 0 1 , 0 010 1 basis for S : 10 0 0 1 , 0 00 1 0 1 0 0 1 , 1 1 −1 −1 0 0 0 0 0 0 −1 , 0 1 −1 . 0 1 0 −1 1 From this, we see that dim[S ] = 5. (b) We must include four additional matrices that are linearly independent and outside of S . The matrices E11 , E12 , E11 + E13 , and E22 will suﬃce in this case. ab ∈ Sym2 (R); a, b, c ∈ R. This vector can be represented as bc 10 01 00 : a linear combination of , , 00 10 01 10 01 00 10 01 00 A=a +b +c . Since , , is a linearly independent 00 10 01 00 10 01 set that also spans Sym2 (R), it is a basis for Sym2 (R). Thus, dim[Sym2 (R)] = 3. Let B ∈ Skew2 (R) so B = 0b 01 01 −B T . Then B = =b , where b ∈ R. The set is linearly independent −b 0 −1 0 −1 0 01 and spans Skew2 (R) so that a basis for Skew2 (R) is . Consequently, dim[Skew2 (R)] = 1. Now, −1 0 dim[Sym2 (R)] = 4, and hence dim[Sym2 (R)] + dim[Skew2 (R)] = 3 + 1 = 4 = dim[M2 (R)]. 30. Let A ∈ Sym2 (R) so A = AT . A = 31. We know that dim[Mn (R)] = n2 . Let S ∈ Symn (R) and let [Sij ] be the matrix with ones in the (i, j ) and (j, i) positions and zeroes elsewhere. Then the general n × n symmetric matrix can be expressed as: S = a11 S11 + a12 S12 +a13 S13 + · · · + a1n S1n + a22 S22 +a23 S23 + · · · + a2n S2n + ··· + an−1 n−1 Sn−1 n−1 + an−1 n Sn−1 n + ann Snn . 
We see that S has been resolved into a linear combination of n + (n − 1) + (n − 2) + · · · + 1 = n(n + 1)/2 linearly independent matrices, which therefore form a basis for Symn(R); hence dim[Symn(R)] = n(n + 1)/2.

Now let T ∈ Skewn(R) and let [Tij] be the matrix with one in the (i, j)-position, negative one in the (j, i)-position, and zeroes elsewhere, including the main diagonal. Then the general n × n skew-symmetric matrix can be expressed as: T = a12T12 + a13T13 + a14T14 + · · · + a1nT1n + a23T23 + a24T24 + · · · + a2nT2n + · · · + an−1,nTn−1,n. We see that T has been resolved into a linear combination of (n − 1) + (n − 2) + (n − 3) + · · · + 2 + 1 = (n − 1)n/2 linearly independent vectors, which therefore form a basis for Skewn(R); hence dim[Skewn(R)] = (n − 1)n/2. Consequently, using these results, we have dim[Symn(R)] + dim[Skewn(R)] = n(n + 1)/2 + (n − 1)n/2 = n² = dim[Mn(R)].

32. (a) S is a two-dimensional subspace of R³. Consequently, any two linearly independent vectors lying in this subspace determine a basis for S. By inspection we see, for example, that v1 = (4, −1, 0) and v2 = (3, 0, 1) both lie in the plane. Further, since they are not proportional, these vectors are linearly independent. Consequently, a basis for S is {(4, −1, 0), (3, 0, 1)}.

(b) To extend the basis obtained in part (a) to obtain a basis for R³, we require one more vector that does not lie in S. For example, since v3 = (1, 0, 0) does not lie on the plane, it is an appropriate vector. Consequently, a basis for R³ is {(4, −1, 0), (3, 0, 1), (1, 0, 0)}.

33. Each vector in S can be written as [a, b; b, a] = a[1, 0; 0, 1] + b[0, 1; 1, 0]. Consequently, a basis for S is given by the linearly independent set {[1, 0; 0, 1], [0, 1; 1, 0]}. To extend this basis to M2(R), we can choose, for example, the two vectors [1, 0; 0, 0] and [0, 0; 1, 0]. Then the linearly independent set {[1, 0; 0, 1], [0, 1; 1, 0], [1, 0; 0, 0], [0, 0; 1, 0]} is a basis for M2(R).

34.
The vectors in S can be expressed as (2a1 + a2)x² + (a1 + a2)x + (3a1 − a2) = a1(2x² + x + 3) + a2(x² + x − 1), and since {2x² + x + 3, x² + x − 1} is linearly independent (these polynomials are non-proportional) and certainly spans S, it forms a basis for S. To extend this basis to V = P2, we must include one additional vector (since P2 is 3-dimensional). Any polynomial that is not in S will suffice. For example, x ∉ S, since x cannot be expressed in the form (2a1 + a2)x² + (a1 + a2)x + (3a1 − a2): the equations 2a1 + a2 = 0, a1 + a2 = 1, 3a1 − a2 = 0 are inconsistent. Thus, the extension we use as a basis for V here is {2x² + x + 3, x² + x − 1, x}. Many other correct answers can also be given here.

35. Since S is a basis for Pn−1, S contains n vectors. Therefore, S ∪ {xⁿ} is a set of n + 1 vectors, which is precisely the dimension of Pn. Moreover, xⁿ does not lie in Pn−1 = span(S), and therefore S ∪ {xⁿ} is linearly independent by Problem 47 in Section 4.5. By Corollary 4.6.13, we conclude that S ∪ {xⁿ} is a basis for Pn.

36. Since S is a basis for Pn−1, S contains n vectors. Therefore, S ∪ {p} is a set of n + 1 vectors, which is precisely the dimension of Pn. Moreover, p does not lie in Pn−1 = span(S), and therefore S ∪ {p} is linearly independent by Problem 47 in Section 4.5. By Corollary 4.6.13, we conclude that S ∪ {p} is a basis for Pn.

37. (a) Let ek denote the kth standard basis vector. Then a basis for Cⁿ with scalars in R is given by {e1, e2, . . . , en, ie1, ie2, . . . , ien}, and the dimension is 2n.

(b) Using the notation in part (a), a basis for Cⁿ with scalars in C is given by {e1, e2, . . . , en}, and the dimension is n.

Solutions to Section 4.7

True-False Review:

1. TRUE. This is the content of Theorem 4.7.1.
The existence of such a linear combination comes from the fact that a basis for V must span V, and the uniqueness of such a linear combination follows from the linear independence of the vectors comprising a basis.

2. TRUE. This follows from the equation [v]B = P_{B←C}[v]C, which is just Equation (4.7.6) with the roles of B and C reversed.

3. TRUE. The number of columns in the change-of-basis matrix P_{C←B} is the number of vectors in B, while the number of rows of P_{C←B} is the number of vectors in C. Since all bases for the vector space V contain the same number of vectors, this implies that P_{C←B} contains the same number of rows and columns.

4. TRUE. If V is an n-dimensional vector space, then P_{C←B}P_{B←C} = In = P_{B←C}P_{C←B}, which implies that P_{C←B} is invertible.

5. TRUE. This follows from the linearity properties: [v − w]B = [v + (−1)w]B = [v]B + [(−1)w]B = [v]B + (−1)[w]B = [v]B − [w]B.

6. FALSE. It depends on the order in which the vectors in the bases B and C are listed. For instance, if we consider the bases B = {(1, 0), (0, 1)} and C = {(0, 1), (1, 0)} for R², then although B and C contain the same vectors, if we let v = (1, 0), then [v]B = [1; 0] while [v]C = [0; 1].

7. FALSE. For instance, if we consider the bases B = {(1, 0), (0, 1)} and C = {(0, 1), (1, 0)} for R², and if we let v = (1, 0) and w = (0, 1), then v ≠ w, but [v]B = [1; 0] = [w]C.

8. TRUE. If B = {v1, v2, . . . , vn}, then the column vector [vi]B is the ith standard basis vector (1 in the ith position and zeroes elsewhere). Thus, for each i, the ith column of P_{B←B} consists of a 1 in the ith position and zeroes elsewhere. This describes precisely the identity matrix.

Problems:

1. Write (5, −10) = c1(2, −2) + c2(1, 4). Then 2c1 + c2 = 5 and −2c1 + 4c2 = −10. Solving this system of equations gives c1 = 3 and c2 = −1. Thus, [v]B = [3; −1].

2. Write (8, −2) = c1(−1, 3) + c2(3, 2). Then −c1 + 3c2 = 8 and 3c1 + 2c2 = −2. Solving this system of equations gives c1 = −2 and c2 = 2.
Thus, [v]B = −2 2 . 3. Write (−9, 1, −8) = c1 (1, 0, 1) + c2 (1, 1, −1) + c3 (2, 0, 1). Then c1 + c2 + 2c3 = −9 and c2 = 1 c1 − c2 + c3 = −8. and Solving this system of equations gives c1 = −4, c2 = 1, and c3 = −3. Thus, −4 [v]B = 1 . −3 4. Write (1, 7, 7) = c1 (1, −6, 3) + c2 (0, 5, −1) + c3 (3, −1, −1). Then c1 + 3c3 = 1 and − 6c1 + 5c2 − c3 = 7 and 3c1 − c2 − c3 = 7. Using Gaussian elimination to solve this system of equations gives c1 = 4, c2 = 6, and c3 = −1. Thus, 4 [v]B = 6 . −1 5. Write (1, 7, 7) = c1 (3, −1, −1) + c2 (1, −6, 3) + c3 (0, 5, −1). Then 3c1 + c2 = 1 and − c1 − 6c2 + 5c3 = 7 and − c1 + 3c2 − c3 = 7. Using Gaussian elimination to solve this system of equations gives c1 = −1, c2 = 4, and c3 = 6. Thus, −1 [v]B = 4 . 6 6. Write (5, 5, 5) = c1 (−1, 0, 0) + c2 (0, 0, −3) + c3 (0, −2, 0). Then −c1 = 5 Therefore, c1 = −5, c2 = 5 −3, and c3 = and −5. 2 − 2c3 = 5 and Thus, −5 [v]B = −5/3 . −5/2 − 3c2 = 5. 292 7. Write −4x2 + 2x + 6 = c1 (x2 + x) + c2 (2 + 2x) + c3 (1). Equating the powers of x on each side, we have c1 = −4 and c1 + 2c2 = 2 and 2c2 + c3 = 6. Solving this system of equations, we ﬁnd that c1 = −4, c2 = 3, and c3 = 0. Hence, −4 [p(x)]B = 3 . 0 8. Write 15 − 18x − 30x2 = c1 (5 − 3x) + c2 (1) + c3 (1 + 2x2 ). Equating the powers of x on each side, we have 5c1 + c2 + c3 = 15 and − 3c1 = −18 and 2c3 = −30. Solving this system of equations, we ﬁnd that c1 = 6, c2 = 0, and c3 = −15. Hence, 6 0 . [p(x)]B = −15 9. Write 4 − x + x2 − 2x3 = c1 (1) + c2 (1 + x) + c3 (1 + x + x2 ) + c4 (1 + x + x2 + x3 ). Equating the powers of x on each side, we have c1 + c2 + c3 + c4 = 4 and c2 + c3 + c4 = −1 and c3 + c4 = 1 and c4 = −2. Solving this system of equations, we ﬁnd that c1 = 5, c2 = −2, c3 = 3, and c4 = −2. Hence, 5 −2 [p(x)]B = 3 . −2 10. Write 8 + x + 6x2 + 9x3 = c1 (x3 + x2 ) + c2 (x3 − 1) + c3 (x3 + 1) + c4 (x3 + x). Equating the powers of x on each side, we have −c2 + c3 = 8 and c4 = 1 and c1 = 6 and c1 + c2 + c3 + c4 = 9. 
Solving this system of equations, we ﬁnd that c1 = 6, c2 = −3, c3 = 5, and c4 = 1. Hence 6 −3 [p(x)]B = 5 . 1 293 11. Write −3 −2 −1 2 1 1 = c1 1 1 1 1 + c2 1 0 1 0 + c3 1 0 1 0 + c4 0 0 . Equating the individual entries of the matrices on each side of this equation (upper left, upper right, lower left, and lower right, respectively) gives c1 + c2 + c3 + c4 = −3 c1 + c2 + c3 = −2 and c1 + c2 = −1 and and c1 = 2. Solving this system of equations, we ﬁnd that c1 = 2, c2 = −3, c3 = −1, and c4 = −1. Thus, 2 −3 [A]B = −1 . −1 12. Write −10 16 −15 −14 = c1 2 −1 3 5 + c2 0 −1 4 1 + c3 1 1 1 1 + c4 3 −1 2 5 . Equating the individual entries of the matrices on each side of this equation (upper left, upper right, lower left, and lower right, respectively) gives 2c1 + c3 + 3c4 = −10, −c1 + 4c2 + c3 − c4 = 16, 3c1 − c2 + c3 + 2c4 = −15, 5c1 + c2 + c3 + 5c4 = −14. Solving this system of equations, we ﬁnd that c1 = −2, c2 = 4, c3 = −3, and c4 = −1. Thus, −2 4 [A]B = −3 . −1 13. Write 5 7 6 8 = c1 −1 0 1 1 + c2 1 −1 3 0 + c3 1 1 0 2 + c4 0 −1 2 3 . Equating the individual entries of the matrices on each side of this equation (upper left, upper right, lower left, and lower right, respectively) gives −c1 + c2 + c3 = 5 and c1 + 3c2 − c4 = 6 and − c2 + c3 + 2c4 = 7 and c1 + 2c3 + 3c4 = 8. Solving this system of equations, we ﬁnd that c1 = −34/3, c2 = 12, c3 = −55/3, and c4 = 56/3. Thus, −34/3 12 [A]B = −55/3 . 56/3 14. Write (x, y, z ) = c1 (0, 6, 3) + c2 (3, 0, 3) + c3 (6, −3, 0). 294 Then 6c1 − 3c3 = y and 3c1 + 3c2 = z. 33 0z The augmented matrix for this linear system is 6 0 −3 y . We can reduce this to row-echelon form 03 6z z/3 110 . Solving this system by back-substitution gives x/3 as 0 1 2 0 0 9 y + 2x − 2z 3c2 + 6c3 = x c1 = 1 2 1 x+ y− z 9 9 9 and 1 2 4 c2 = − x − y + z 9 9 9 and and c3 = 2 1 2 x + y − z. 9 9 9 Hence, denote the ordered basis {v1 , v2 , v3 } by B , we have 1 2 1 9x + 9y − 9z [v]B = − 1 x − 2 y + 4 z . 9 9 9 1 2 2 9x + 9y − 9z 15. 
Write a0 + a1 x + a2 x2 = c1 (1 + x) + c2 x(x − 1) + c3 (1 + 2x2 ). Equating powers of x on both sides of this equation, we have c1 + c3 = a0 c1 − c2 = a1 c2 + 2c3 = a2 . 1 0 1 a0 The augmented matrix corresponding to this system of equations is 1 −1 0 a1 . We can reduce this 0 1 2 a2 101 a0 . Thus, solving by back-substitution, we have a2 to row-echelon form as 0 1 2 0 0 1 −a0 + a1 + a2 c1 = 2a0 − a1 − a2 and and and c2 = 2a0 − 2a1 − a2 and c3 = −a0 + a1 + a2 . Hence, relative to the ordered basis B = {p1 , p2 , p3 }, we have 2a0 − a1 − a2 [p(x)]B = 2a0 − 2a1 − a2 . −a0 + a1 + a2 16. Let v1 = (9, 2) and v2 = (4, −3). Setting (9, 2) = c1 (2, 1) + c2 (−3, 1) and solving, we ﬁnd c1 = 3 and c2 = −1. Thus, [v1 ]C = 3 −1 . Next, setting (4, −3) = c1 (2, 1) + c2 (−3, 1) and solving, we ﬁnd c1 = −1 and c2 = −2. Thus, [v2 ]C = PC ←B = −1 −2 3 −1 −1 −2 . Therefore, . 295 17. Let v1 = (−5, −3) and v2 = (4, 28). Setting (−5, −3) = c1 (6, 2) + c2 (1, −1) and solving, we ﬁnd c1 = −1 and c2 = 1. Thus, [v1 ]C = −1 1 . Next, setting (4, 28) = c1 (6, 2) + c2 (1, −1) and solving, we ﬁnd c1 = 4 and c2 = −20. Thus, [v2 ]C = PC ←B = 4 −20 −1 4 1 −20 . Therefore, . 18. Let v1 = (2, −5, 0), v2 = (3, 0, 5), and v3 = (8, −2, −9). Setting (2, −5, 0) = c1 (1, −1, 1) + c2 (2, 0, 1) + c3 (0, 1, 3) 4 and solving, we ﬁnd c1 = 4, c2 = −1, and c3 = −1. Thus, [v1 ]C = −1 . Next, setting −1 (3, 0, 5) = c1 (1, −1, 1) + c2 (2, 0, 1) + c3 (0, 1, 3) 1 and solving, we ﬁnd c1 = 1, c2 = 1, and c3 = 1. Thus, [v2 ]C = 1 . Finally, setting 1 (8, −2, −9) = c1 (1, −1, 1) + c2 (2, 0, 1) + c3 (0, 1, 3) −2 and solving, we ﬁnd c1 = −2, c2 = 5, and c3 = −4. Thus, [v3 ]C = 5 . Therefore, −4 4 1 −2 5 . PC ←B = −1 1 −1 1 −4 19. Let v1 = (−7, 4, 4), v2 = (4, 2, −1), and v3 = (−7, 5, 0). Setting (−7, 4, 4) = c1 (1, 1, 0) + c2 (0, 1, 1) + c3 (3, −1, −1) 0 and solving, we ﬁnd c1 = 0, c2 = 5/3 and c3 = −7/3. Thus, [v1 ]C = 5/3 . 
Next, setting −7/3 (4, 2, −1) = c1 (1, 1, 0) + c2 (0, 1, 1) + c3 (3, −1, −1) and solving, we ﬁnd c1 = 0, c2 = 19/3 and c3 = −7/3. Setting (4, 2, −1) = c1 (1, 1, 0) + c2 (0, 1, 1) + c3 (3, −1, −1) 296 3 and solving, we ﬁnd c1 = 3, c2 = −2/3, and c3 = 1/3. Thus, [v2 ]C = −2/3 . Finally, setting 1/3 (−7, 5, 0) = c1 (1, 1, 0) + c2 (0, 1, 1) + c3 (3, −1, −1) 5 and solving, we ﬁnd c1 = 5, c2 = −4, and c3 = −4. Hence, [v3 ]C = −4 . Therefore, −4 0 3 5 2 5 PC ←B = 3 − 3 −4 . 1 −7 −4 3 3 20. Let v1 = 7 − 4x and v2 = 5x. Setting 7 − 4x = c1 (1 − 2x) + c2 (2 + x) 3 2 and solving, we ﬁnd c1 = 3 and c2 = 2. Thus, [v1 ]C = . Next, setting 5x = c1 (1 − 2x) + c2 (2 + x) and solving, we ﬁnd c1 = −2 and c2 = 1. Hence, [v2 ]C = PC ←B = −2 1 3 −2 2 1 . Therefore, . 21. Let v1 = −4 + x − 6x2 , v2 = 6 + 2x2 , and v3 = −6 − 2x + 4x2 . Setting −4 + x − 6x2 = c1 (1 − x + 3x2 ) + c2 (2) + c3 (3 + x2 ) −1 and solving, we ﬁnd c1 = −1, c2 = 3, and c3 = −3. Thus, [v1 ]C = 3 . Next, setting −3 6 + 2x2 = c1 (1 − x + 3x2 ) + c2 (2) + c3 (3 + x2 ) 0 and solving, we ﬁnd c1 = 0, c2 = 0, and c3 = 2. Thus, [v2 ]C = 0 . Finally, setting 2 −6 − 2x + 4x2 = c1 (1 − x + 3x2 ) + c2 (2) + c3 (3 + x2 ) 2 and solving, we ﬁnd c1 = 2, c2 = −1, and c3 = −2. Thus, [v3 ]C = −1 . Therefore, −2 −1 0 2 PC ←B = 3 0 −1 . −3 2 −2 297 22. Let v1 = −2 + 3x + 4x2 − x3 , v2 = 3x + 5x2 + 2x3 , v3 = −5x2 − 5x3 , and v4 = 4 + 4x + 4x2 . Setting −2 + 3x + 4x2 − x3 = c1 (1 − x3 ) + c2 (1 + x) + c3 (x + x2 ) + c4 (x2 + x3 ) 0 −2 and solving, we ﬁnd c1 = 0, c2 = −2, c3 = 5, and c4 = −1. Thus, [v1 ]C = 5 . Next, setting −1 3x + 5x2 + 2x3 = c1 (1 − x3 ) + c2 (1 + x) + c3 (x + x2 ) + c4 (x2 + x3 ) 0 0 and solving, we ﬁnd c1 = 0, c2 = 0, c3 = 3, and c4 = 2. Thus, [v2 ]C = . Next, solving 3 2 −5x2 − 5x3 = c1 (1 − x3 ) + c2 (1 + x) + c3 (x + x2 ) + c4 (x2 + x3 ) 0 0 and solving, we ﬁnd c1 = 0, c2 = 0, c3 = 0, and c4 = −5. Thus, [v3 ]C = 0 . 
Finally, setting −5 4 + 4x + 4x2 = c1 (1 − x3 ) + c2 (1 + x) + c3 (x + x2 ) + c4 (x2 + x3 ) 2 2 and solving, we ﬁnd c1 = 2, c2 = 2, c3 = 2, and c4 = 2. Thus, [v4 ]C = . Therefore, 2 2 00 02 −2 0 0 2 . PC ←B = 53 0 2 −1 2 −5 2 23. Let v1 = 2 + x2 , v2 = −1 − 6x + 8x2 , and v3 = −7 − 3x − 9x2 . Setting 2 + x2 = c1 (1 + x) + c2 (−x + x2 ) + c3 (1 + 2x2 ) 3 and solving, we ﬁnd c1 = 3, c2 = 3, and c3 = −1. Thus, [v1 ]C = 3 . Next, solving −1 −1 − 6x + 8x2 = c1 (1 + x) + c2 (−x + x2 ) + c3 (1 + 2x2 ) −4 and solving, we ﬁnd c1 = −4, c2 = 2, and c3 = 3. Thus, [v2 ]C = 2 . Finally, solving 3 −7 − 3x − 9x2 = c1 (1 + x) + c2 (−x + x2 ) + c3 (1 + 2x2 ) 298 −2 and solving, we ﬁnd c1 = −2, c2 = 1, and c3 = −5. Thus, [v3 ]C = 1 . Therefore, −5 3 −4 −2 2 1 . PC ←B = 3 −1 3 −5 24. Let v1 = 1 0 −1 −2 , v2 = 1 0 −1 −2 0 −1 3 0 , v3 = 3 0 5 0 , and v4 = 1 1 + c2 1 1 1 0 + c3 = c1 1 1 1 0 −2 −4 0 0 1 0 . Setting 1 0 + c4 0 0 −2 1 and solving, we ﬁnd c1 = −2, c2 = c3 = c4 = 1. Thus, [v1 ]C = 1 . Next, setting 1 0 −1 3 0 c1 1 1 1 1 1 1 + c2 1 0 1 0 + c3 1 0 + c4 1 0 0 0 0 3 and solving, we ﬁnd c1 = 0, c2 = 3, c3 = −4, and c4 = 1. Thus, [v2 ]C = −4 . Next, setting 1 3 0 5 0 c1 1 1 1 1 1 1 + c2 1 0 + c3 1 0 1 0 + c4 and solving, we ﬁnd c1 = 0, c2 = 0, c3 = 5, and c4 = −2. Thus, [v3 ]C = −2 −4 0 0 c1 1 1 1 1 + c2 1 1 1 0 1 0 + c3 1 0 10 00 0 0 . Finally, setting 5 −2 + c4 1 0 0 0 and solving, we ﬁnd c1 = 0, c2 = 0, c3 = −4, and c4 = 2. Thus, PC ←B = 25. Let v1 = E12 , v2 = E22 , v3 = E21 , and v4 0 0 [v1 ]C = , [v2 ]C = 0 1 −2 0 0 1 3 0 1 −4 5 1 1 −2 0 0 [v4 ]C = −4 . Therefore, we have 2 0 0 . −4 2 = E11 . We see by 1 0 , [v3 ]C = 0 0 inspection that 0 0 , [v4 ]C = 1 0 0 1 . 0 0 299 Therefore, PC ←B 0 0 = 0 1 1 0 0 0 0 0 1 0 0 1 . 0 0 26. We could simply compute the inverse of the matrix obtained in Problem 16. For instructive purposes, however, we proceed directly. Let w1 = (2, 1) and w2 = (−3, 1). 
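The shortcut mentioned in Problem 26, namely obtaining P_{B←C} by inverting the matrix P_{C←B} found in Problem 16, can be sketched numerically. A minimal sketch (assuming numpy is available):

```python
import numpy as np

# P_{C<-B} from Problem 16: its columns are [v1]_C and [v2]_C.
P_CB = np.array([[3.0, -1.0],
                 [-1.0, -2.0]])

# Change-of-basis matrices in opposite directions are mutually inverse,
# so P_{B<-C} is simply the matrix inverse of P_{C<-B}.
P_BC = np.linalg.inv(P_CB)
print(P_BC)  # matches the direct computation: [[2/7, -1/7], [-1/7, -3/7]]

assert np.allclose(P_CB @ P_BC, np.eye(2))
```

This agrees with the matrix obtained by the direct method in Problem 26.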
Setting (2, 1) = c1 (9, 2) + c2 (4, −3) 2/7 −1/7 and solving, we obtain c1 = 2/7 and c2 = −1/7. Thus, [w1 ]B = . Next, setting (−3, 1) = c1 (9, 2) + c2 (4, −3) and solving, we obtain c1 = −1/7 and c2 = −3/7. Thus, [w2 ]B = PB ←C = 2 7 1 −7 −1 7 3 −7 −1/7 −3/7 . Therefore, . 27. We could simply compute the inverse of the matrix obtained in Problem 17. For instructive purposes, however, we proceed directly. Let w1 = (6, 2) and w2 = (1, −1). Setting (6, 2) = c1 (−5, −3) + c2 (4, 28) and solving, we obtain c1 = −5/4 and c2 = −1/16. Thus, [w1 ]B = −5/4 −1/16 . Next, setting (1, −1) = c1 (−5, −3) + c2 (4, 28) and solving, we obtain c1 = −1/4 and c2 = −1/16. Thus, [w2 ]B = PB ←C = 5 −4 1 −4 1 − 16 1 − 16 −1/4 −1/16 . Therefore, . 28. We could simply compute the inverse of the matrix obtained in Problem 18. For instructive purposes, however, we proceed directly. Let w1 = (1, −1, 1), w2 = (2, 0, 1), and w3 = (0, 1, 3). Setting (1, −1, 1) = c1 (2, −5, 0) + c2 (3, 0, 5) + c3 (8, −2, −9) 1/5 and solving, we ﬁnd c1 = 1/5, c2 = 1/5, and c3 = 0. Thus, [w1 ]B = 1/5 . Next, setting 0 (2, 0, 1) = c1 (2, −5, 0) + c2 (3, 0, 5) + c3 (8, −2, −9) 300 −2/45 and solving, we ﬁnd c1 = −2/45, c2 = 2/5, and c3 = 1/9. Thus, [w2 ]B = 2/5 . Finally, setting 1/9 (0, 1, 3) = c1 (2, −5, 0) + c2 (3, 0, 5) + c3 (8, −2, −9) −7/45 and solving, we ﬁnd c1 = −7/45, c2 = 2/5, and c3 = −1/9. Thus, [w3 ]B = 2/5 . Therefore, −1/9 1 2 7 − 45 − 45 5 2 2 PB ←C = 1 5 5 . 5 1 1 0 −9 9 29. We could simply compute the inverse of the matrix obtained in Problem 20. For instructive purposes, however, we proceed directly. Let w1 = 1 − 2x and w2 = 2 + x. Setting 1 − 2x = c1 (7 − 4x) + c2 (5x) and solving, we ﬁnd c1 = 1/7 and c2 = −2/7. Thus, [w1 ]B = 1/7 −2/7 . Setting 2 + x = c1 (7 − 4x) + c2 (5x) and solving, we ﬁnd c1 = 2/7 and c2 = 3/7. Thus, [w2 ]B = PB ←C = 30. Referring to Problem 22, we have −1 PB ←C = (PC ←B ) 0 −2 = 5 −1 1 7 2 7 −2 7 3 7 2/7 3/7 . Therefore, . 
−1 1/2 0 02 −7/6 0 0 2 = −11/30 3 0 2 1/2 2 −5 2 −1/2 5/6 13/30 0 0 0 1/3 0 . 2/15 −1/5 0 0 31. We could simply compute the inverse of the matrix obtained in Problem 25. For instructive purposes, however, we proceed directly. Let w1 = E22 , w2 = E11 , w3 = E21 , and w4 = E12 . We see by inspection that 0 0 0 1 1 , [w2 ]B = 0 , [w3 ]B = 0 , [w4 ]B = 0 . [w1 ]B = 0 1 0 0 0 1 0 0 Therefore PB ←C 0 1 = 0 0 0 0 0 1 0 0 1 0 1 0 . 0 0 301 32. We ﬁrst compute [v]B and [v]C directly. Setting (−5, 3) = c1 (9, 2) + c2 (4, −3) 3 − 35 and solving, we obtain c1 = −3/35 and c2 = −37/35. Thus, [v]B = (−5, 3) = c1 (2, 1) + c2 (−3, 1) and solving, we obtain c1 = 4/5 and c2 = 11/5. Thus, [v]C = PC ←B = 3 −1 −1 −2 − 37 35 4 5 11 5 . Setting . Now, according to Problem 16, , so PC ←B [v]B = 3 −1 −1 −2 3 − 35 − 37 35 = 4 5 11 5 = [v]C , which conﬁrms Equation (4.7.6). 33. We ﬁrst compute [v]B and [v]C directly. Setting (−1, 2, 0) = c1 (−7, 4, 4) + c2 (4, 2, −1) + c3 (−7, 5, 0) and solving, we obtain c1 = 3/43, c2 = 12/43, and c3 = 10/43. Thus, [v]B = 3 43 12 43 . Setting 10 43 (−1, 2, 0) = c1 (1, 1, 0) + c2 (0, 1, 1) + c3 (3, −1, −1) 2 and solving, we obtain c1 = 2, c2 = −1, and c3 = −1. Thus, [v]C = −1 . Now, according to Problem −1 3 0 3 5 0 3 5 2 43 5 2 , so PC ←B [v]B = 5 − 2 −4 12 = −1 = [v]C , which 19, PC ←B = 3 − 3 −4 3 3 43 7 1 7 1 10 −1 −3 −4 −3 −4 3 3 43 conﬁrms Equation (4.7.6). 34. We ﬁrst compute [v]B and [v]C directly. Setting 6 − 4x = c1 (7 − 4x) + c2 (5x) and solving, we obtain c1 = 6/7 and c2 = −4/35. Thus, [v]B = 6 7 4 − 35 6 − 4x = c1 (1 − 2x) + c2 (2 + x) . Next, setting 302 and solving, we obtain c1 = 14/5 and c2 = 8/5. Thus, [v]C = 3 −2 2 1 PC ←B = 14 5 8 5 . Now, according to Problem 20, , so 3 −2 PC ←B [v]B = 2 1 6 7 4 − 35 = 14 5 8 5 = [v]C , which conﬁrms Equation (4.7.6). 35. We ﬁrst compute [v]B and [v]C directly. Setting 5 − x + 3x2 = c1 (−4 + x − 6x2 ) + c2 (6 + 2x2 ) + c3 (−6 − 2x + 4x2 ) 1 5 and solving, we obtain c1 = 1, c2 = 5/2, and c3 = 1. 
Thus, [v]B = 2 . Next, setting 1 and solving, we PC ←B = −1 3 −3 5 − x + 3x2 = c1 (1 − x + 3x2 ) + c2 (2) + c3 (3 + x2 ) 1 obtain c1 = 1, c2 = 2, and c3 = 0. Thus, [v]C = 2 . Now, according to Problem 21, 0 0 2 0 −1 , so 2 −2 1 1 −1 0 2 PC ←B [v]B = 3 0 −1 5 = 2 = [v]C , 2 0 −3 2 −2 1 which conﬁrms Equation (4.7.6). 36. We ﬁrst compute [v]B and [v]C directly. Setting −1 −1 −4 5 = c1 1 0 −1 −2 + c2 0 −1 3 0 + c3 3 0 5 0 + c4 −2 −4 0 0 −5/2 −13/6 and solving, we obtain c1 = −5/2, c2 = −13/6, c3 = 37/6, and c4 = 17/2. Thus, [v]B = 37/6 . Next, 17/2 setting −1 −1 11 11 11 10 = c1 + c2 + c3 + c4 −4 5 11 10 00 00 5 −9 and solving, we ﬁnd c1 = 5, c2 = −9, c3 = 3, and c4 = 0. Thus, [v]C = 3 . Now, according to Problem 0 303 24, PC ←B −2 0 0 0 1 3 0 0 , and so = 1 −4 5 −4 1 1 −2 2 −2 0 0 0 1 3 0 0 PC ←B [v]B = 1 −4 5 −4 1 1 −2 2 5 −5/2 −13/6 −9 37/6 = 3 = [v]C , 0 17/2 which conﬁrms Equation (4.7.6). 37. Write x = a1 v1 + a2 v2 + · · · + an vn . We have cx = c(a1 v1 + a2 v2 + · · · + an vn ) = (ca1 )v1 + (ca2 )v2 + · · · + (can )vn . Hence, [cx]B = ca1 ca2 . . . can = c a1 a2 . . . = c[x]B . an 38. We must show that {v1 , v2 , . . . , vn } is linearly independent and spans V . Check linear independence: Assume that c1 v1 + c2 v2 + · · · + cn vn = 0. We wish to show that c1 = c2 = · · · = cn = 0. Now by assumption, the zero vector 0 can be uniquely written as a linear combination of the vectors in {v1 , v2 , . . . , vn }. Since 0 = 0 · v1 + 0 · v2 + · · · + 0 · vn , we therefore conclude that c1 = c2 = · · · = cn = 0, as needed. Check spanning property: Let v be an arbitrary vector in V . By assumption, it is possible to express v (uniquely) as a linear combination of the vectors in {v1 , v2 , . . . , vn }; say v = a1 v1 + a2 v2 + · · · + an vn . Therefore, v lies in span{v1 , v2 , . . . , vn }. Since v is an arbitrary member of V , we conclude that {v1 , v2 , . . . , vn } spans V . 39. Let B = {v1 , v2 , . . . , vn } and let C = {vσ(1) , vσ(2) , . . . 
, vσ(n)}, where σ is a permutation of the set {1, 2, . . . , n}. We will show that P_{C←B} contains exactly one 1 in each row and column, and zeroes elsewhere (the argument for P_{B←C} is essentially identical, or can be deduced from the fact that P_{B←C} is the inverse of P_{C←B}). Let i be in {1, 2, . . . , n}. The ith column of P_{C←B} is [vi]_C. Suppose that ki ∈ {1, 2, . . . , n} is such that σ(ki) = i. Then [vi]_C is a column vector with a 1 in the ki-th position and zeroes elsewhere. Since the values k1, k2, . . . , kn are distinct, we see that each column of P_{C←B} contains a single 1 (with zeroes elsewhere) in a different position from any other column. Hence, when we consider all n columns as a whole, each position in the column vector must have a 1 occurring exactly once in one of the columns. Therefore, P_{C←B} contains exactly one 1 in each row and column, and zeroes elsewhere.

Solutions to Section 4.8

True-False Review:

1. TRUE. Note that rowspace(A) is a subspace of R^n and colspace(A) is a subspace of R^m, so certainly if rowspace(A) = colspace(A), then R^n and R^m must be the same. That is, m = n.

2. FALSE. A basis for the row space of A consists of the nonzero row vectors of any row-echelon form of A.

3. FALSE. The column vectors of the original matrix A that correspond to the pivot columns of a row-echelon form of A form a basis for colspace(A).

4. TRUE. Both rowspace(A) and colspace(A) have dimension equal to rank(A), the number of nonzero rows in a row-echelon form of A. Equivalently, their dimensions are both equal to the number of pivots occurring in a row-echelon form of A.

5. TRUE. For an invertible n × n matrix, rank(A) = n. That means there are n nonzero rows in a row-echelon form of A, and so rowspace(A) is n-dimensional. Therefore, we conclude that rowspace(A) = R^n.

6. TRUE. This follows immediately from the (true) statements in True-False Review Questions 4 and 5 above.

Problems:

1. A row-echelon form of A is [[1, −2], [0, 0]].
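Row-space and column-space computations of this kind can be sketched programmatically. A minimal sketch (assuming numpy is available; the matrix A below is an assumption, chosen only to be consistent with Problem 1's stated row-echelon form and column-space basis):

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduced row-echelon form of A and its pivot-column indices."""
    R = A.astype(float).copy()
    pivots, r = [], 0
    for c in range(R.shape[1]):
        if r == R.shape[0]:
            break
        p = r + np.argmax(np.abs(R[r:, c]))   # best pivot row for column c
        if abs(R[p, c]) < tol:
            continue                          # no pivot in this column
        R[[r, p]] = R[[p, r]]                 # swap the pivot row into place
        R[r] /= R[r, c]                       # scale the pivot entry to 1
        for i in range(R.shape[0]):           # clear the rest of the column
            if i != r:
                R[i] -= R[i, c] * R[r]
        pivots.append(c)
        r += 1
    return R, pivots

A = np.array([[1, -2], [-3, 6]])   # hypothetical A consistent with Problem 1
R, pivots = rref(A)
print(R[:len(pivots)])    # nonzero rows: a basis for rowspace(A), here {(1, -2)}
print(A[:, pivots].T)     # pivot columns of A: a basis for colspace(A), here {(1, -3)}
```

The nonzero rows of the reduced form give a row-space basis, while the columns of the original matrix in the pivot positions give a column-space basis, exactly as in the solutions above.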
Consequently, a basis for rowspace(A) is {(1, −2)}, whereas a basis for colspace(A) is {(1, −3)}. 1 1 −3 2 . Consequently, a basis for rowspace(A) is 0 1 −2 1 {(1, 1, −3, 2), (0, 1, −2, 1)}, whereas a basis for colspace(A) is {(1, 3), (1, 4)}. 123 3. A row-echelon form of A is 0 1 2 . Consequently, a basis for rowspace(A) is {(1, 2, 3), (0, 1, 2)}, 000 whereas a basis for colspace(A) is {(1, 5, 9), (2, 6, 10)}. 011 3 1 4. A row-echelon form of A is 0 0 0 . Consequently, a basis for rowspace(A) is {(0, 1, 3 )}, whereas 000 a basis for colspace(A) is {(3, −6, 12)}. 1 2 −1 3 0 0 0 1 . Consequently, a basis for rowspace(A) is 5. A row-echelon form of A is 0 0 0 0 00 00 {(1, 2, −1, 3), (0, 0, 0, 1)}, whereas a basis for colspace(A) is {(1, 3, 1, 5), (3, 5, −1, 7)}. 1 −1 2 3 2 −4 3 (Note: This is not row-echelon form, but it is not nec6. We can reduce A to 0 0 0 6 −13 essary to bring the leading nonzero element in each row to 1.). Consequently, a basis for rowspace(A) is {(1, −1, 2, 3), (0, 2, −4, 3), (0, 0, 6, −13)}, whereas a basis for colspace(A) is {(1, 1, 3), (−1, 1, 1), (2, −2, 4)}. 1 −1 2 1 . A row-echelon form of this matrix is 7. We determine a basis for the rowspace of the matrix 5 −4 7 −5 −4 2. A row-echelon form of A is 305 1 −1 2 0 1 −9 . Consequently, a basis for the subspace spanned by the given vectors is {(1, −1, 2), (0, 1, −9)}. 0 0 0 13 3 1 5 −1 . A row-echelon form of this matrix is 8. We determine a basis for the rowspace of the matrix 2 7 4 14 1 13 3 0 1 −2 . Consequently, a basis for the subspace spanned by the given vectors is {(1, 3, 3), (0, 1, −2)}. 0 0 0 00 0 1 1 −1 2 3 −4 . A row-echelon form of 9. We determine a basis for the rowspace of the matrix 2 1 1 2 −6 10 1 1 −1 2 this matrix is 0 1 −5 8 . Consequently, a basis for the subspace spanned by the given vectors is 00 00 {(1, 1, −1, 2), (0, 1, −5, 8)}. 1413 2 8 3 5 10. We determine a basis for the rowspace of the matrix 1 4 0 4 . A row-echelon form of this 2826 141 3 0 0 1 −1 . 
Consequently, a basis for the subspace spanned by the given vectors is {(1, 4, 1, 3), (0, 0, 1, −1)}.

11. A row-echelon form of A is [[1, −3], [0, 0]]. Consequently, a basis for rowspace(A) is {(1, −3)}, whereas a basis for colspace(A) is {(−3, 1)}. Both of these subspaces are lines in the xy-plane.

12. (a) A row-echelon form of A is [[1, 2, 4], [0, 1, 1], [0, 0, 0]]. Consequently, a basis for rowspace(A) is {(1, 2, 4), (0, 1, 1)}, whereas a basis for colspace(A) is {(1, 5, 3), (2, 11, 7)}.

(b) We see that both of these subspaces are 2-dimensional, and therefore each corresponds geometrically to a plane. By inspection, we see that the two basis vectors for rowspace(A) satisfy the equation 2x + y − z = 0, and therefore rowspace(A) corresponds to the plane with this equation. Similarly, we see that the two basis vectors for colspace(A) satisfy the equation 2x − y + z = 0, and therefore colspace(A) corresponds to the plane with this equation.

13. If A = [[1, 1], [2, 2]], colspace(A) is spanned by (1, 2), but if we permute the two rows of A, we obtain a new matrix whose column space is spanned by (2, 1). On the other hand, if we multiply the first row by 2, then we obtain a new matrix whose column space is spanned by (2, 2). And if we add −2 times the first row to the second row, we obtain a new matrix whose column space is spanned by (1, 0). Therefore, in all cases, colspace(A) is altered by the row operations performed.

14. Many examples are possible here, but an easy 2 × 2 example is the matrix A = [[1, 1], [−1, −1]]. We have rowspace(A) = {(r, r) : r ∈ R}, while colspace(A) = {(r, −r) : r ∈ R}. Thus, rowspace(A) and colspace(A) have no nonzero vectors in common.

Solutions to Section 4.9

True-False Review:

1. FALSE. For example, consider the 7 × 3 zero matrix, 0_{7×3}. We have rank(0_{7×3}) = 0, and therefore by the Rank-Nullity Theorem, nullity(0_{7×3}) = 3. But |m − n| = |7 − 3| = 4. Many other examples can be given.
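The Rank-Nullity Theorem used throughout this review can be checked numerically. A minimal sketch (assuming numpy is available; the 3 × 4 matrix M below is a hypothetical example, chosen only to have rank 2):

```python
import numpy as np

def nullity(A, tol=1e-10):
    # nullity = (number of columns) - rank, with the rank obtained
    # independently by counting nonzero singular values of A
    s = np.linalg.svd(A, compute_uv=False)
    return A.shape[1] - int(np.count_nonzero(s > tol))

Z = np.zeros((7, 3))   # the 7 x 3 zero matrix from Question 1
assert np.linalg.matrix_rank(Z) + nullity(Z) == 3

M = np.array([[1, 4, -1, 3],   # hypothetical 3 x 4 matrix of rank 2
              [2, 9, -1, 7],
              [2, 8, -2, 6]])
assert np.linalg.matrix_rank(M) + nullity(M) == M.shape[1]
print(np.linalg.matrix_rank(M), nullity(M))  # -> 2 2
```

In every case, rank(A) + nullity(A) equals the number of columns of A, as the theorem asserts.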
In particular, provided that m > 2n, the m × n zero matrix will show the falsity of the statement. 2. FALSE. In this case, rowspace(A) is a subspace of R9 , hence it cannot possibly be equal to R7 . The correct conclusion here should have been that rowspace(A) is a 7-dimensional subspace of R9 . 3. TRUE. By the Rank-Nullity Theorem, rank(A) = 7, and therefore, rowspace(A) is a 7-dimensional subspace of R7 . Hence, rowspace(A) = R7 . 01 is an upper triangular matrix with two zeros appearing 00 on the main diagonal. However, since rank(A) = 1, we also have nullity(A) = 1. 4. FALSE. For instance, the matrix A = 5. TRUE. An invertible matrix A must have nullspace(A) = {0}, but if colspace(A) is also {0}, then A would be the zero matrix, which is certainly not invertible. 6. FALSE. For instance, if we take A = 1 + 1 = 2, but A + B = 1 0 1 0 1 0 0 0 0 0 and B = 1 0 , then nullity(A)+ nullity(B ) = , and nullity(A + B ) = 1. 7. FALSE. For instance, if we take A = 1 0 0 0 and B = 0 0 0 1 , then nullity(A)·nullity(B ) = 1 · 1 = 1, but AB = 02 , and nullity(AB ) = 2. 8. TRUE. If x belongs to the nullspace of B , then B x = 0. Therefore, (AB )x = A(B x) = A0 = 0, so that x also belongs to the nullspace of AB . Thus, nullspace(B ) is a subspace of nullspace(AB ). Hence, nullity(B ) ≤ nullity(AB ), as claimed. 9. TRUE. If y belongs to nullspace(A), then Ay = 0. Hence, if Axp = b, then A(y + xp ) = Ay + Axp = 0 + b = b, which demonstrates that y + xp is also a solution to the linear system Ax = b. Problems: 1. The matrix is already in row-echelon form. A vector (x, y, z, w) in nullspace(A) must satisfy x − 6z − w = 0. We see that y ,z , and w are free variables, and 6 1 6z + w 0 y 100 : y, z, w ∈ R = span nullspace(A) = , , . z 1 0 0 w 0 0 1 307 Therefore, nullity(A) = 3. Moreover, since this row-echelon form contains one nonzero row, rank(A) = 1. Since the number of columns of A is 4 = 1 + 3, the Rank-Nullity Theorem is veriﬁed. 2. 
We bring A to row-echelon form: 1 −1 2 0 0 REF(A) = . 1 A vector (x, y ) in nullspace(A) must satisfy x − 2 y = 0. Setting y = 2t, we get x = t. Therefore, nullspace(A) = t 2t :t∈R = span 1 2 . Therefore, nullity(A) = 1. Since REF(A) contains one nonzero row, rank(A) = 1. Since the number of columns of A is 2 = 1 + 1, the Rank-Nullity Theorem is veriﬁed. 3. We bring A to row-echelon form: 1 REF(A) = 0 0 1 −1 1 7 . 0 1 Since there are no unpivoted columns, there are no free variables in the associated homogeneous linear system, and so nullspace(A) = {0}. Therefore, nullity(A) = 0. Since REF(A) contains three nonzero rows, rank(A) = 3. Since the number of columns of A is 3 = 3 + 0, the Rank-Nullity Theorem is veriﬁed. 4. We bring A to row-echelon form: 1 REF(A) = 0 0 4 −1 3 1 1 1 . 0 00 A vector (x, y, z, w) in nullspace(A) must satisfy x +4y − z +3w = 0 and y + z + w = 0. We see that z and w are free variables. Set z = t and w = s. Then y = −z − w = −t − s and x = z − 4y − 3w = t − 4(−t − s) − 3s = 5t + s. Therefore, 5 1 5t + s −t − s : s, t ∈ R = span −1 , −1 . nullspace(A) = t 1 0 s 0 1 Therefore, nullity(A) = 2. Moreover, since REF(A) contains two nonzero rows, rank(A) = 2. Since the number of columns of A is 4 = 2 + 2, the Rank-Nullity Theorem is veriﬁed. 5. Since all rows (or columns) of this matrix are proportional to the ﬁrst one, rank(A) = 1. Since A has two columns, we conclude from the Rank-Nullity Theorem that nullity(A) = 2 − rank(A) = 2 − 1 = 1. 6. The ﬁrst and last rows of A are not proportional, but the middle rows are proportional to the ﬁrst row. Therefore, rank(A) = 2. Since A has ﬁve columns, we conclude from the Rank-Nullity Theorem that nullity(A) = 5 − rank(A) = 5 − 2 = 3. 308 7. Since the second and third columns are not proportional and the ﬁrst column is all zeros, we have rank(A) = 2. Since A has three columns, we conclude from the Rank-Nullity Theorem that nullity(A) = 3 − rank(A) = 3 − 2 = 1. 8. 
This matrix (already in row-echelon form) has one nonzero row, so rank(A) = 1. Since it has four columns, we conclude from the Rank-Nullity Theorem that nullity(A) = 4 − rank(A) = 4 − 1 = 3. 1 3 −1 4 9 11 . We quickly reduce this augmented 9. The augmented matrix for this linear system is 2 7 1 5 21 10 matrix to row-echelon form: 1 3 −1 4 0 1 11 3 . 00 00 A solution (x, y, z ) to the system will have a free variable corresponding to the third column: z = t. Then y + 11t = 3, so y = 3 − 11t. Finally, x + 3y − z = 4, so x = 4 + t − 3(3 − 11t) = −5 + 34t. Thus, the solution set is −5 34 −5 + 34t 3 − 11t : t ∈ R = t −11 + 3 : t ∈ R . t 1 0 −5 34 Observe that xp = 3 is a particular solution to Ax = b, and that −11 forms a basis for 0 1 nullspace(A). Therefore, the set of solution vectors obtained does indeed take the form (4.9.3). 1 −1 2 3 6 10. The augmented matrix for this linear system is 1 −2 5 5 13 . We quickly reduce this aug2 −1 1 4 5 mented matrix to row-echelon form: 1 −1 2 3 6 0 1 −3 −2 −7 . 0 0 0 0 0 A solution (x, y, z, w) to the system will have free variables corresponding to the third and fourth columns: z = t and w = s. Then y − 3z − 2w = −7 requires that y = −7 + 3t + 2s, and x − y + 2z + 3w = 6 requires that x = 6 + (−7 + 3t + 2s) − 2t − 3s = −1 + t − s. Thus, the solution set is −1 + t − s −7 + 3t + 2s t s 1 −1 −1 2 −7 3 : s, t ∈ R = t + : s, t ∈ R . + s 0 0 1 0 1 0 309 −1 −7 Observe that xp = 0 is a particular solution to Ax = b, and that 0 −1 1 3 , 2 1 0 0 1 is a basis for nullspace(A). Therefore, the set of solution vectors obtained does indeed take the form (4.9.3). 1 1 −2 −3 3 −1 −7 2 . We quickly reduce this augmented 11. The augmented matrix for this linear system is 1 0 1 1 2 2 −4 −6 matrix to row-echelon form: 1 1 −2 −3 1 0 1 − 11 . 4 4 00 1 1 There are no free variables in the solution set (since none of the ﬁrst three columns is unpivoted), and we ﬁnd 2 the solution set by back-substitution: −3 . 
It is easy to see that this is indeed a particular solution: 1 2 xp = −3 . Since the row-echelon form of A has three nonzero rows, rank(A) = 3. Thus, nullity(A) = 0. 1 Hence, nullspace(A) = {0}. Thus, the only term in the expression (4.9.3) that appears in the solution is xp , and this is precisely the unique solution we obtained in the calculations above. 12. By inspection, we see that a particular solution to this (homogeneous) linear system is xp = 0. We quickly reduce this augmented matrix to row-echelon form: 1 1 −1 5 0 0 1 −1 7 0 . 2 2 00 000 A solution (x, y, z, w) to the system will have free variables corresponding to the third and fourth columns: 1 7 z = t and w = s. Then y − 2 z + 7 w = 0 requires that y = 1 t − 2 s, and x + y − z + 5w = 0 requires that 2 2 1 7 1 3 x = t − ( 2 t − 2 s) − 5s = 2 t − 2 s. Thus, the solution set is 1 1 3 −2 2t − 3s 2 2 1 1 −7 t − 7s 2 2 : s, t ∈ R = t 2 + s 2 : s, t ∈ R . 0 t 1 s 0 1 3 1 −2 2 −7 1 Since the vectors 2 and 2 form a basis for nullspace(A), our solutions to take the proper form 0 1 0 1 given in (4.9.3). 310 13. By the Rank-Nullity Theorem, rank(A) = 7 − nullity(A) = 7 − 4 = 3, and hence, colspace(A) is 3-dimensional. But since A has three rows, colspace(A) is a subspace of R3 . Therefore, since the only 3-dimensional subspace of R3 is R3 itself, we conclude that colspace(A) = R3 . Now rowspace(A) is also 3-dimensional, but it is a subspace of R5 . Therefore, it is not accurate to say that rowspace(A) = R3 . 14. By the Rank-Nullity Theorem, rank(A) = 4 − nullity(A) = 4 − 0 = 4, so we conclude that rowspace(A) is 4-dimensional. Since rowspace(A) is a subspace of R4 (since A contains four columns), and it is 4-dimensional, we conclude that rowspace(A) = R4 . Although colspace(A) is 4-dimensional, colspace(A) is a subspace of R6 , and therefore it is not accurate to say that colspace(A) = R4 . 15. If rowspace(A) = nullspace(A), then we know that rank(A) = nullity(A). Therefore, rank(A)+ nullity(A) must be even. 
But rank(A)+ nullity(A) is the number of columns of A. Therefore, A contains an even number of columns. 16. We know that rank(A) + nullity(A) = 7. But since A only has ﬁve rows, rank(A) ≤ 5. Therefore, nullity(A) ≥ 2. However, since nullspace(A) is a subspace of R7 , nullity(A) ≤ 7. Therefore, 2 ≤ nullity(A) ≤ 7. There are many examples of a 5 × 7 matrix A with nullity(A) = 2; one example is 1000000 0 1 0 0 0 0 0 0 0 1 0 0 0 0 . The only 5 × 7 matrix with nullity(A) = 7 is 05×7 , the 5 × 7 zero matrix. 0 0 0 1 0 0 0 0000100 17. We know that rank(A) + nullity(A) = 8. But since A only has three rows, rank(A) ≤ 3. Therefore, nullity(A) ≥ 5. However, since nullspace(A) is a subspace of R8 , nullity(A) 8. Therefore, 5 ≤ nullspace(A) ≤ 8. There are many examples of a 3 × 8 matrix A with nullity(A) = 5; one example is 10000000 0 1 0 0 0 0 0 0 . The only 3 × 8 matrix with nullity(A) = 8 is 03×8 , the 3 × 8 zero matrix. 00100000 18. If B x = 0, then AB x = A0 = 0. This observation shows that nullspace(B ) is a subspace of nullspace(AB ). On the other hand, if AB x = 0, then B x = (A−1 A)B x = A−1 (AB )x = A−1 0 = 0, so B x = 0. Therefore, nullspace(AB ) is a subspace of nullspace(B ). As a result, since nullspace(B ) and nullspace(AB ) are subspaces of each other, they must be equal: nullspace(AB ) = nullspace(B ). Therefore, nullity(AB ) = nullity(B ). Solutions to Section 4.10 True-False Review: 1. TRUE. This follows from the equivalence of (a) and (m) in the Invertible Matrix Theorem. 2. FALSE. If the matrix has n linearly independent rows, then by the equivalence of (a) and (m) in the Invertible Matrix Theorem, such a matrix would be invertible. But if that were so, then by part (j) of the Invertible Matrix Theorem, such a matrix would have to have n linearly independent columns. 3. FALSE. If the matrix has n linearly independent columns, then by the equivalence of (a) and (j) in the Invertible Matrix Theorem, such a matrix would be invertible. 
But if that were so, then by part (m) of the Invertible Matrix Theorem, such a matrix would have to have n linearly independent rows.

4. FALSE. An n × n matrix A with det(A) = 0 is not invertible by part (g) of the Invertible Matrix Theorem. Therefore, by the equivalence of (a) and (l) in the Invertible Matrix Theorem, the columns of A do not form a basis for R^n.

5. TRUE. If rowspace(A) ≠ R^n, then by the equivalence of (a) and (n) in the Invertible Matrix Theorem, A is not invertible. Therefore, A is not row-equivalent to the identity matrix. Since B is row-equivalent to A, then B is not row-equivalent to the identity matrix, and therefore B is not invertible. Hence, by part (k) of the Invertible Matrix Theorem, we conclude that colspace(B) ≠ R^n.

6. FALSE. If nullspace(A) = {0}, then A is invertible by the equivalence of (a) and (c) in the Invertible Matrix Theorem. Since E is an elementary matrix, it is also invertible. Therefore, EA is invertible. By part (g) of the Invertible Matrix Theorem, det(EA) ≠ 0, contrary to the statement given.

7. FALSE. The matrix [A|B] has 2n columns, but only n rows, and therefore rank([A|B]) ≤ n. Hence, by the Rank-Nullity Theorem, nullity([A|B]) ≥ n > 0.

8. TRUE. The first and third rows are proportional, and hence statement (m) in the Invertible Matrix Theorem is false. Therefore, by the equivalence with (a), the matrix is not invertible.

9. FALSE. For instance, the matrix [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]] is of the form given, but satisfies any (and all) of the statements of the Invertible Matrix Theorem.

10. FALSE. For instance, the matrix [[1, 2, 1], [3, 6, 2], [1, 0, 0]] is of the form given, but has a nonzero determinant, and so by part (g) of the Invertible Matrix Theorem, it is invertible.

Solutions to Section 4.11

True-False Review:

1. FALSE. The converse of this statement is true, but for the given statement, many counterexamples exist. For instance, the vectors v = (1, 1) and w = (1, 0) in R^2 are linearly independent, but they are not orthogonal.

2. FALSE.
We have ⟨kv, kw⟩ = k⟨v, kw⟩ = k⟨kw, v⟩ = k²⟨w, v⟩ = k²⟨v, w⟩, where we have used the axioms of an inner product to carry out these steps. Therefore, the result conflicts with the given statement, which must therefore be a false statement.

3. TRUE. We have ⟨c1v1 + c2v2, w⟩ = ⟨c1v1, w⟩ + ⟨c2v2, w⟩ = c1⟨v1, w⟩ + c2⟨v2, w⟩ = c1 · 0 + c2 · 0 = 0.

4. TRUE. We have ⟨x + y, x − y⟩ = ⟨x, x⟩ − ⟨x, y⟩ + ⟨y, x⟩ − ⟨y, y⟩ = ⟨x, x⟩ − ⟨y, y⟩ = ||x||² − ||y||². This will be negative if and only if ||x||² < ||y||², and since ||x|| and ||y|| are nonnegative real numbers, ||x||² < ||y||² if and only if ||x|| < ||y||.

5. FALSE. For example, if V is the inner product space of integrable functions on (−∞, ∞), then the formula

⟨f, g⟩ = ∫ₐᵇ f(t)g(t) dt

is a valid inner product for any choice of real numbers a < b. See also Problem 9 in this section, in which a “non-standard” inner product on R² is given.

6. TRUE. The angle θ between the vectors −2v and −2w satisfies

cos θ = ((−2v) · (−2w)) / (||−2v|| ||−2w||) = ((−2)²(v · w)) / ((−2)² ||v|| ||w||) = (v · w) / (||v|| ||w||),

and this is also the cosine of the angle between the vectors v and w.

7. FALSE. This definition of ⟨p, q⟩ will not satisfy the requirements of an inner product. For instance, if we take p = x, then ⟨p, p⟩ = 0, but p ≠ 0.

Problems:

1. ⟨v, w⟩ = 8, ||v|| = 3√3, ||w|| = √7. Hence,

cos θ = ⟨v, w⟩ / (||v|| ||w||) = 8 / (3√21) ⟹ θ ≈ 0.95 radians.

2. ⟨f, g⟩ = ∫₀^π x sin x dx = π, ||f||² = ∫₀^π x² dx = π³/3, ||g||² = ∫₀^π sin² x dx = π/2. Hence,

cos θ = π / ((π³/3)^(1/2) (π/2)^(1/2)) = √6/π ⟹ θ ≈ 0.68 radians.

3. ⟨v, w⟩ = (2 + i)(−1 − i) + (3 − 2i)(1 + 3i) + (4 + i)(3 + i) = 19 + 11i. ||v|| = √⟨v, v⟩ = √35, ||w|| = √⟨w, w⟩ = √22.

4. Let A, B, C ∈ M2(R).
(1): ⟨A, A⟩ = a11² + a12² + a21² + a22² ≥ 0, and ⟨A, A⟩ = 0 ⟺ a11 = a12 = a21 = a22 = 0 ⟺ A = 0.
(2): ⟨A, B⟩ = a11b11 + a12b12 + a21b21 + a22b22 = b11a11 + b12a12 + b21a21 + b22a22 = ⟨B, A⟩.
(3): Let k ∈ R. ⟨kA, B⟩ = ka11b11 + ka12b12 + ka21b21 + ka22b22 = k(a11b11 + a12b12 + a21b21 + a22b22) = k⟨A, B⟩.
(4): (A + B ), C = (a11 + b11 )c11 + (a12 + b12 )c12 + (a21 + b21 )c21 + (a22 + b22 )c22 = (a11 c11 + b11 c11 ) + (a12 c12 + b12 c12 ) + (a21 c21 + b21 c21 ) + (a22 c22 + b22 c22 ) = (a11 c11 + a12 c12 + a21 c21 + a22 c22 ) + (b11 c11 + b12 c12 + b21 c21 + b22 c22 = A, C + B , C . 5. We need only demonstrate one example showing that some property of an inner product is violated by the 1 0 given formula. Set A = . Then according to the given formula, we have A, A = −2, violating 0 −1 the requirement that u, u ≥ 0 for all vectors u. 6. A, B = 2 · 3 + (−1)1 + 3(−1) + 5 · 2 = 12. √ ||A|| = A, A = 2 · 2 + (−1)(−1) + 3 · 3 + 5 · 5 = 39. √ ||B || = B , B = 3 · 3 + 1 · 1 + (−1)(−1) + 2 · 2 = 15. √ √ 7. A, B = 13, ||A|| = 33, ||B || = 7. 313 8. Let p1 , p2 , p3 ∈ P1 where p1 (x) = a + bx, p2 (x) = c + dx, and p3 (x) = e + f x. Deﬁne p1 , p2 = ac + bd. (8.1) The properties 1 through 4 of Deﬁnition 4.11.3 must be veriﬁed. (1): p1 , p1 = a2 + b2 ≥ 0 and p1 , p1 = 0 ⇐⇒ a = b = 0 ⇐⇒ p1 (x) = 0. (2): p1 , p2 = ac + bd = ca + db = p2 , p1 . (3): Let k ∈ R. k p1 , p2 = kac + kbd = k (ac + bd) = k p1 , p2 . (4): p1 + p2 , p3 = (a + c)e + (b + d)f = ae + ce + bf + df = (ae + bf ) + (ce + df ) = p1 , p 3 + p 2 , p 3 . Hence, the mapping deﬁned by (8.1) is an inner product in P1 . 9. Property 1: u, u = 2u1 u1 + u1 u2 + u2 u1 + 2u2 u2 = 2u2 + 2u1 u2 + 2u2 = (u1 + u2 )2 + u2 + u2 ≥ 0. 1 2 1 2 u, u = 0 ⇐⇒ (u1 + u2 )2 + u2 + u2 = 0 ⇐⇒ u1 = 0 = u2 ⇐⇒ u = 0. 1 2 Property 2: u, v = 2u1 v1 + u1 v2 + u2 v1 + 2u2 v2 = 2u2 v2 + u2 v1 + u1 v2 + 2u1 v1 = v, u . Property 3: k u, v = k (2u1 v1 + u1 v2 + u2 v1 + 2u2 v2 ) = 2ku1 v1 + ku1 v2 + ku2 v1 + 2ku2 v2 = 2(ku1 )v1 + (ku1 )v2 + (ku2 )v1 + 2(ku2 )v2 = k u, v = 2ku1 v1 + ku1 v2 + ku2 v1 + 2ku2 v2 = 2u1 (kv1 ) + u1 (kv2 ) + u2 (kv1 ) + 2u2 (kv2 ) = u, k v . 
Property 4: (u + v), w = (u1 + v1 , u2 + v2 ), (w1 , w2 ) = 2(u1 + v1 )w1 + (u1 + v1 )w2 + (u2 + v2 )w1 + 2(u2 + v2 )w2 = 2u1 w1 + 2v1 w1 + u1 w2 + v1 w2 + u2 w1 + v2 w1 + 2u2 w2 + 2v2 w2 = 2u1 w1 + u1 w2 + u2 w1 + 2u2 w2 + 2v1 w1 + v1 w2 + v2 w1 + 2v2 w2 = u, w + v, w . Therefore u, v = 2u1 v1 + u1 v2 + u2 v1 + 2u2 v2 deﬁnes an inner product on R2 . 10. (a) Using the deﬁned inner product: v, w = (1, 0), (−1, 2) = 2 · 1(−1) + 1 · 2 + 0(−1) + 2 · 0 · 2 = 0. (b) Using the standard inner product: v, w = (1, 0), (−1, 2) = 1(−1) + 0 · 2 = −1 = 0. 11. (a) Using the deﬁned inner product: v, w = 2 · 2 · 3 + 2 · 6 + (−1)3 + 2(−1)6 = 9 = 0. (b) Using the standard inner product: v, w = 2 · 3 + (−1)6 = 0. 12. (a) Using the deﬁned inner product: v, w = 2 · 1 · 2 + 1 · 1 + (−2) · 2 + 2 · (−2) · 1 = −3 = 0. (b) Using the standard inner product: v, w = 1 · 2 + (−2)(1) = 0. 314 13. (a) Show symmetry: v, w = w, v . v, w = (v1 , v2 ), (w1 , w2 ) = v1 w1 − v2 w2 = w1 v1 − w2 v2 = (w1 , w2 ), (v1 , v2 ) = w, v . (b) Show k v, w = k v, w = v, k w . Note that k v = k (v1 , v2 ) = (kv1 , kv2 ) and k w = k (w1 , w2 ) = (kw1 , kw2 ). k v, w = (kv1 , kv2 ), (w1 , w2 ) = (kv1 )w1 − (kv2 )w2 = k (v1 w1 − v2 w2 ) = k (v1 , v2 ), (w1 , w2 ) = k v, w . Also, v, k w = (v1 , v2 ), (kw1 , kw2 ) = v1 (kw1 ) − v2 (kw2 ) = k (v1 w1 − v2 w2 ) = k (v1 , v2 ), (w1 , w2 ) = k v, w . (c) Show (u + v), w = u, w + v, w . Let w = (w1 , w2 ) and note that u + v = (u1 + v1 , u2 + v2 ). (u + v), w = (u1 + v1 , u2 + v2 ), (w1 , w2 ) = (u1 + v1 )w1 − (u2 + v2 )w2 = u1 w1 + v1 w1 − u2 w2 − v2 w2 = u1 w1 − u2 w2 + v1 w1 − v2 w2 = (u1 , v2 ), (w1 , w2 ) + (v1 , v2 ), (w1 , w2 ) = u, w + v, w . Property 1 fails since, for example, u, u < 0 whenever |u2 | > |u1 |. 2 2 2 2 14. v, v = 0 =⇒ (v1 , v2 ), (v1 , v2 ) = 0 =⇒ v1 − v2 = 0 =⇒ v1 = v2 =⇒ |v1 | = |v2 |. Thus, in this space, 2 null vectors are given by {(v1 , v2 ) ∈ R : |v1 | = |v2 |} or equivalently, v = r(1, 1) or v = s(1, −1) where r, s ∈ R. 2 2 2 2 15. 
⟨v, v⟩ < 0 ⟹ ⟨(v1, v2), (v1, v2)⟩ < 0 ⟹ v1² − v2² < 0 ⟹ v1² < v2². In this space, timelike vectors are given by {(v1, v2) ∈ R² : v1² < v2²}.

16. ⟨v, v⟩ > 0 ⟹ ⟨(v1, v2), (v1, v2)⟩ > 0 ⟹ v1² − v2² > 0 ⟹ v1² > v2². In this space, spacelike vectors are given by {(v1, v2) ∈ R² : v1² > v2²}.

17. (Figure 65, for Problem 17: the null vectors lie along the two lines v2 = ±v1; the timelike vectors fill the regions above and below these lines, and the spacelike vectors fill the regions to their left and right.)

18. Suppose that some ki ≤ 0. If ei denotes the standard basis vector in Rⁿ with 1 in the ith position and zeros elsewhere, then the given formula yields ⟨ei, ei⟩ = ki ≤ 0, violating the first axiom of an inner product. Therefore, if the given formula defines a valid inner product, then we must have ki > 0 for all i.

Now we prove the converse. Suppose that each ki > 0. We verify the axioms of an inner product for the given proposed inner product. We have

⟨v, v⟩ = k1v1² + k2v2² + ··· + knvn² ≥ 0,

and we have equality if and only if v1 = v2 = ··· = vn = 0. For the second axiom, we have

⟨w, v⟩ = k1w1v1 + k2w2v2 + ··· + knwnvn = k1v1w1 + k2v2w2 + ··· + knvnwn = ⟨v, w⟩.

For the third axiom, we have

⟨kv, w⟩ = k1(kv1)w1 + k2(kv2)w2 + ··· + kn(kvn)wn = k[k1v1w1 + k2v2w2 + ··· + knvnwn] = k⟨v, w⟩.

Finally, for the fourth axiom, we have

⟨u + v, w⟩ = k1(u1 + v1)w1 + k2(u2 + v2)w2 + ··· + kn(un + vn)wn = [k1u1w1 + k2u2w2 + ··· + knunwn] + [k1v1w1 + k2v2w2 + ··· + knvnwn] = ⟨u, w⟩ + ⟨v, w⟩.

19. We have ⟨v, 0⟩ = ⟨v, 0 + 0⟩ = ⟨0 + 0, v⟩ = ⟨0, v⟩ + ⟨0, v⟩ = ⟨v, 0⟩ + ⟨v, 0⟩ = 2⟨v, 0⟩, which implies that ⟨v, 0⟩ = 0.

20. (a) For all v, w ∈ V,

||v + w||² = ⟨v + w, v + w⟩
= ⟨v, v + w⟩ + ⟨w, v + w⟩ (by Property 4)
= ⟨v + w, v⟩ + ⟨v + w, w⟩ (by Property 2)
= ⟨v, v⟩ + ⟨w, v⟩ + ⟨v, w⟩ + ⟨w, w⟩ (by Property 4)
= ⟨v, v⟩ + ⟨v, w⟩ + ⟨v, w⟩ + ⟨w, w⟩ (by Property 2)
= ||v||² + 2⟨v, w⟩ + ||w||².

(b) This follows immediately by substituting ⟨v, w⟩ = 0 in the formula given in part (a).
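The timelike/spacelike/null classification in Problems 15 and 16 can be spot-checked numerically. The sketch below is illustrative only (the function names are my own, not the text's) and uses the indefinite form ⟨v, w⟩ = v1w1 − v2w2 from these problems:

```python
# Spot-check for Problems 15 and 16: classify vectors of R^2 under the
# indefinite form <v, w> = v1*w1 - v2*w2. This is not a true inner product
# (property 1 fails, as noted in Problem 13), so <v, v> can be negative.
# Function names are illustrative, not from the text.

def form(v, w):
    return v[0] * w[0] - v[1] * w[1]

def classify(v):
    q = form(v, v)          # q = v1^2 - v2^2
    if q < 0:
        return "timelike"   # v1^2 < v2^2
    if q > 0:
        return "spacelike"  # v1^2 > v2^2
    return "null"           # |v1| = |v2|

# classify((1, 2)) is "timelike"; classify((3, -3)) is "null".
```

Note that `form((1, 2), (2, 1))` evaluates to 0, so (1, 2) and (2, 1) are "orthogonal" under this form even though they are not orthogonal under the standard dot product.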
(c) (i) From part (a), it follows that ||v + w||2 = ||v||2 + 2 v, w + ||w||2 and ||v − w||2 = ||v + (−w)||2 = ||v||2 + 2 v, −w + || − w||2 = ||v||2 − 2 v, w + ||w||2 . Thus, ||v + w||2 − ||v − w||2 = ||v||2 + 2 v, w + ||w||2 − ||v||2 − 2 v, w + ||w||2 = 4 v, w . (ii) ||v + w||2 + ||v − w||2 = ||v||2 + 2 v, w + ||w||2 + ||v||2 − 2 v, w + ||w||2 = 2||v||2 + 2||w||2 = 2 ||v||2 + ||w||2 . 21. For all v, w ∈ V and vi , wi ∈ C. ||v + w||2 = v + w, v + w = v, v + w + w, v + w by Property 4 = v + w, v + v + w, w by Property 2 = v, v + w, v + v, w + w, w by Property 4 = v, v + w, v + v, w + w, w = v, v + w, w + v, w + v, w = ||v||2 + ||w||2 + 2Re{ v, w } = ||v||2 + 2Re{ v, w } + ||v||2 . Solutions to Section 4.12 316 1. TRUE. An orthonormal basis is simply an orthogonal basis consisting of unit vectors. 2. FALSE. The converse of this statement is true, but for the given statement, many counterexamples exist. For instance, the set of vectors {(1, 1), (1, 0)} in R2 is linearly independent, but does not form an orthogonal set. 3. TRUE. We can verify easily that π cos t sin tdt = 0 sin2 t π | = 0, 20 which means that {cos x, sin x} is an orthogonal set. Moreover, since they are non-proportional functions, they are linearly independent. Therefore, they comprise an orthogonal basis for the 2-dimensional inner product space span{cos x, sin x}. 4. FALSE. For instance, in R3 we can take the vectors x1 = (1, 0, 0), x2 = (1, 1, 0), and x3 = (0, 0, 1). Applying the Gram-Schmidt process to the ordered set {x1 , x2 , x3 } yields the standard basis {e1 , e2 , e3 }. How1 ever, applying the Gram-Schmidt process to the ordered set {x3 , x2 , x1 } yields the basis {x3 , x2 , ( 2 , − 1 , 0)} 2 instead. 5. TRUE. This is the content of Theorem 4.12.7. 6. TRUE. The vector P(w, v) is a scalar multiple of the vector v, and since v is orthogonal to u, P(w, v) is also orthogonal to u, so that its projection onto u must be 0. 7. TRUE. 
We have P(w1 + w2 , v) = w1 + w2 , v w1 , v + w2 , v w1 , v w2 , v v= v= v+ v = P(w1 , v) + P(w2 , v). 2 2 2 v v v v2 Problems: 1. (2, −1, 1), (1, 1, −1) = 2 + (−1) + (−1) = 0; (2, −1, 1), (0, 1, 1) = 0 + (−1) + 1 = 0; (1, 1, −1), (0, 1, 1) = 0 + 1 + (−1) = 0. Since each vector in the set is orthogonal to every other vector in the set, the vectors form an orthogonal set. To generate an orthonormal set, we divide each vector by its norm: √ √ ||(2, −1, 1)|| = √4 + 1 + 1 = √6, ||(1, 1, −1)|| = 1 + 1 + 1 = 3, and √ √ ||(0, 1, 1)|| = 0 + 1 + 1 = 2. Thus, the corresponding orthonormal set is: √ √ √ 6 3 2 (2, −1, 1), (1, 1, −1), (0, 1, 1) . 6 3 2 2. (1, 3, −1, 1), (−1, 1, 1, −1) = −1 + 3 + (−1) + (−1) = 0; (1, 3, −1, 1), (1, 0, 2, 1) = 1 + 0 + (−2) + 1 = 0; (−1, 1, 1, −1), (1, 0, 2, 1) = −1 + 0 + 2 + (−1) = 0. Since all vectors in the set are orthogonal to each other, they form an orthogonal set. To generate an orthonormal set, √ divide each vector by its norm: we √ √ ||(1, 3, −1, 1)|| = √+ 9 + 1 + 1 = √ = 2 3, 1 12 ||(−1, 1, 1, −1)|| = 1 + 1 + 1 + 1 = 4 = 2, and √ √ ||(1, 0, 2, 1)|| = 1 + 0 + 4 + 1 = 6. Thus, an orthonormal set is: √ √ 3 1 6 (1, 3, −1, 1), (−1, 1, 1, −1), (1, 0, 2, 1) . 6 2 6 317 3. (1, 2, −1, 0), (1, 0, 1, 2) = 1 + 0 + (−1) + 0 = 0; (1, 2, −1, 0), (−1, 1, 1, 0) = −1 + 2 + (−1) + 0 = 0; (1, 2, −1, 0), (1, −1, −1, 0) = 1 + (−2) + 1 + 0 = 0; (1, 0, 1, 2), (−1, 1, 1, 0) = −1 + 0 + 1 + 0 = 0; (1, 0, 1, 2), (1, −1, −1, 0) = 1 + 0 + (−1) + 0 = 0; (−1, 1, 1, 0), (1, −1, −1, 0) = −1 + (−1) + (−1) = −3. Hence, this is not an orthogonal set of vectors. 4. (1, 2, −1, 0, 3), (1, 1, 0, 2, −1) = 1 + 2 + 0 + 0 + (−3) = 0; (1, 2, −1, 0, 3), (4, 2, −4, −5, −4) = 4 + 4 + 4 + 0 + (−12) = 0; (1, 1, 0, 2, −1), (4, 2, −4, −5, −4) = 4 + 2 + 0 + (−10) + 4 = 0. Since all vectors in the set are orthogonal to each other, they form an orthogonal set. 
To generate an orthonormal set, we divide each vector by its norm: √ √ ||(1, 2, −1, 0, 3)|| = √1 + 4 + 1 + 0 + 9 = √15, ||(1, 1, 0, 2, −1)|| = 1 + 1 + 0 + 4 + 1 = 7, and√ √ ||(4, 2, −4, −5, −4)|| = 16 + 4 + 16 + 25 + 16 = 77. Thus, an orthonormal set is: √ √ √ 15 7 77 (1, 2, −1, 0, 3), (1, 1, 0, 2, −1), (4, 2, −4, −5, −4) . 15 7 77 5. We require that v1 , v2 = v1 , w = v2 , w = 0. Let w = (a, b, c) where a, b, c ∈ R. v1 , v2 = (1, 2, 3), (1, 1, −1) = 0. v1 , w = (1, 2, 3), (a, b, c) =⇒ a + 2b + 3c = 0. v2 , w = (1, 1, −1), (a, b, c) =⇒ a + b − c = 0. Letting the free variable c = t ∈ R, the system has the solution a = 5t, b = −4t, and c = t. Consequently, {(1, 2, 3), (1, 1, −1), (5t, −4t, t)} will form an orthogonal set whenever t = 0. To determine the corresponding orthonormal set, we must divide each vector by its√ norm: √ √ √ √ √ √ √ ||v1 || = 1 + 4 + 9 = 14, ||v2 || = 1 + 1 + 1 = 3, ||w|| = 25t2 + 16t2 + t2 = 42t2 = |t| 42 = t 42 if t ≥ 0. Setting t = 1, an orthonormal set is: √ √ √ 14 3 42 (1, 2, 3), (1, 1, −1), (5, −4, 1) . 14 3 42 6. (1 − i, 3 + 2i), (2 + 3i, 1 − i) = (1 − i)(2 + 3i) + (3 + 2i)(1 − i) = (1 − i)(2 − 3i) + (3 + 2i)(1 + i) = (2 − 3i − 2i − 3) + (3 + 3i + 2i − 2) = (−1 − 5i) + (1 + √ ) = 0. The vectors are orthogonal. 5i √ ||(1 − i, 3 + 2i)|| = (1 − i)(1 + i) + (3 + 2i)(3 − 2i) = 1 + 1 + 9 + 4 = 15. √ √ ||(2 + 3i, 1 − i)|| = (2 + 3i)(2 − 3i) + (1 − i)(1 + i) = 4 + 9 + 1 + 1 = 15. Thus, the corresponding orthonormal set is: √ √ 15 15 (1 − i, 3 + 2i), (2 + 3i, 1 − i) . 15 15 7. (1 − i, 1 + i, i), (0, i, 1 − i) = (1 − i) · 0 + (1 + i)(−i) + i(1 + i) = 0. (1 − i, 1 + i, i), (−3 + 3i, 2 + 2i, 2i) = (1 − i)(−3 − 3i) + (1 + i)(2 − 2i) + i(−2i) = 0. (0, i, 1 − i), (−3 + 3i, 2 + 2i, 2i) = 0 + i(2 − 2i) + (1 − i)(−2i) = (2i + 2) + (−2i − 2) = 0. Hence, the vectors are orthogonal. To obtain a corresponding orthonormal set, we divide each vector by its norm. √ √ ||(1 − i, 1 + i, i)|| = (1 − i)(1 + i) + (1 + i)(1 − i) + i(−i) = 1 + 1 + 1 + 1 + 1 = 5. 
318 √ √ ||(0, i, 1 − i)|| = 0 + i(−i) + (1 − i)(1 + i) = 1 + 1 + 1 = 3. √ √ ||(−3 + 3i, 2 + 2i, 2i)|| = (−3 + 3i)(−3 − 3i) + (2 + 2i)(2 − 2i) + 2i(−2i) = 9 + 9 + 4 + 4 + 4 = 30. Consequently, an orthonormal set is: √ √ √ 5 3 30 (1 − i, 1 + i, i), (0, i, 1 − i), (−3 + 3i, 2 + 2i, 2i) . 5 3 30 8. Let z = a + bi where a, b ∈ R. We require that v, w = 0. (1 − i, 1 + 2i), (2 + i, a + bi) = 0 =⇒ (1 − i)(2 − i) + (1 + 2i)(a − bi) = 0 =⇒ 1 − 3i + a + 2b + (2a − b)i = 0. a + 2b = −1 Equating real parts and imaginary parts from the last equality results in the system: 2a − b = 3. This system has the solution a = 1 and b = −1; hence z = 1 − i. Our desired orthogonal set is given by {(1 − i, 1 + 2i), (2 + i, 1 − i)}. ||(1 − i, 1 + 2i)|| = ||(2 + i, 1 − i)|| = (1 − i)(1 + i) + (1 + 2i)(1 − 2i) = (2 + i)(2 − i) + (1 − i)(1 + i) = √ √ 1+1+1+4= 4+1+1+1= √ √ 7. 7. The corresponding orthonormal set is given by: √ √ 7 7 (1 − i, 1 + 2i), (2 + i, 1 − i) . 7 7 1 1 − cos πx = 0. π −1 −1 1 1 sin πx f1 , f3 = 1, cos πx = = 0. cos πxdx = π −1 −1 1 11 f2 , f3 = sin πx, cos πx = sin πx cos πxdx = sin 2πxdx = 2 −1 −1 vectors are orthogonal. 1 √ ||f1 || = 1dx = [x]1 1 = 2. − 9. f1 , f2 = 1, sin πx = sin πxdx = −1 cos 2πx 4π −1 1 1 sin2 πxdx = ||f2 || = −1 −1 1 1 cos2 πxdx = ||f3 || = −1 Consequently, −1 √ 1 − cos 2πx dx = 2 x 2 1 1 + cos 2πx dx = 2 x 2 1 2 , sin πx, cos πx 2 1 1 · xdx = 10. f1 , f2 = 1, x = −1 f1 , f3 = 1, 3x2 − 1 = 2 1 −1 − −1 + −1 1 sin 2πx 4π 1 sin 2πx 4π 1 = 1. −1 1 = 1. −1 is an orthonormal set of functions on [−1, 1]. x2 2 1 = 0. −1 3x2 − 1 x3 − x dx = 2 2 1 = 0. −1 1 = 0. Thus, the −1 319 f2 , f3 = x, 1 3x2 − 1 = 2 −1 1 ||f1 || = dx = −1 1 2 ||f2 || = x dx = −1 √ [x]1 1 = − x3 3 1 3x2 − 1 1 3x4 x2 dx = − 2 24 2 x· = 0. Thus, the vectors are orthogonal. −1 2. √ 1 6 . 
3 = −1 2 3x2 − 1 11 1 9x5 dx = (9x4 − 6x2 + 1)dx = − 2x3 + x ||f3 || = 2 4 −1 45 −1 To obtain a set of orthonormal vectors, we divide each vector by its norm: 1 √ 1 = −1 √ √ √ f1 2 6 10 f2 f3 = f1 , = f2 , = f3 . ||f1 || 2 ||f2 || 2 ||f3 || 2 √ Thus, √ √ 2 6 10 , x, (3x2 − 1) 2 2 4 is an orthonormal set of vectors. 1 11. f1 , f2 = sin πx sin 2πxdx = −1 1 2 1 (cos 3πx − cos πx)dx = 0. −1 1 11 (cos 4πx − cos 2πx)dx = 0. 2 −1 −1 1 1 1 (cos 5πx − cos πx)dx = 0. f2 , f3 = sin 2πx sin 3πxdx = 2 −1 −1 Therefore, {f1 , f2 , f3 } is an orthogonal set. f1 , f3 = sin πx sin 3πxdx = 1 1 sin2 πxdx = ||f1 || = −1 −1 1 1 (1 − cos 2πx)dx = 2 1 sin2 2πxdx = ||f2 || = −1 −1 1 1 sin 2πx x− 2 2π 1 sin 4πx x− 2 4π 1 (1 − cos 4πx)dx = 2 1 1 = 1. −1 1 = 1. −1 1 1 1 sin 6πx (1 − cos 6πx)dx = x− = 1. 2 6π −1 −1 2 −1 Thus, it follows that {f1 , f2 , f3 } is an orthonormal set of vectors on [−1, 1]. sin2 3πxdx = ||f3 || = 1 12. f1 , f2 = cos πx, cos 2πx = cos πx cos 2πxdx = −1 1 f1 , f3 = cos πx, cos 3πx = cos πx cos 3πxdx = −1 1 f2 , f3 = cos 2πx, cos 3πx = 1 2 cos 2πx cos 3πxdx = −1 1 1 2 (cos 3πx + cos πx)dx = 0. −1 1 (cos 4πx + cos 2πx)dx = 0. −1 1 2 1 (cos 5πx + cos πx)dx = 0. −1 Therefore, {f1 , f2 , f3 } is an orthogonal set. 1 cos2 πxdx = ||f1 || = −1 1 cos2 2πxdx = ||f2 || = −1 1 2 1 2 1 (1 + cos 2πx)dx = −1 1 (1 + cos 4πx)dx = −1 1 sin 2πx x+ 2 2π 1 sin 4πx x+ 2 4π 1 = 1. −1 1 = 1. −1 10 . 5 320 1 1 11 1 sin 6πx (1 + cos 6πx)dx = x+ = 1. 2 −1 2 6π −1 −1 Thus, it follows that{f1 , f2 , f3 } is an orthonormal set of vectors on [−1, 1]. ||f3 || = cos2 3πxdx = 13. It is easily veriﬁed that A1 , A2 = 0, A1 , A3 = 0, A2 , A3 = 0. Thus we require a, b, c, d such that A1 , A4 = 0 =⇒ a + b − c + 2d = 0, A2 , A4 = 0 =⇒ −a + b + 2c + d = 0, A3 , A4 = 0 =⇒ a − 3b + 2d = 0. 1 3 Solving this system for a, b, c, and d, we obtain: a = c, b = − c, d = 0. Thus, 2 2 A4 = −1c 2 c 0 3 2c = 2c 3 −1 2 0 =k 3 −1 2 0 where k is any nonzero real number. 14. Let v1 = (1, −1, −1) and v√= (2, 1, −1). 
√ 2 u1 = v1 = (1, −1, −1), ||u1 || = 1 + 1 + 1 = 3. v2 , u1 = (2, 1, −1), (1, −1, −1) = 2 − 1 + 1 = 2. v2 , u1 1 2 u2 = v2 − u1 = (2, 1, −1) − (1, −1, −1) = (4, 5, −1). ||u1 ||2 3 3 1√ ||u2 || = (16 + 25 + 1)/9 = 42. Hence, an orthonormal basis is: 3 √ √ 3 42 (1, −1, −1), (4, 5, −1) . 3 42 15. Let v1 = (2, 1, −2) and v√= (1, 3, −1). √ 2 u1 = v1 = (2, 1, −2), ||u1 || = 4 + 1 + 4 = 9 = 3. v2 , u1 = (1, 3, −1), (2, 1, −2) = 2 · 1 + 1 · 3 + (−2)(−1) = 7. 7 5 v2 , u1 u1 = (1, 3, −1) − (2, 1, −2) = (−1, 4, 1). u2 = v2 − 2 ||u1 || 9 9 √ 5√ 52 ||u2 || = 1 + 16 + 1 = . Hence, an orthonormal basis is: 9 3 √ 1 2 (2, 1, −2), (−1, 4, 1) . 3 6 16. Let v1 = (−1, 1, 1, 1) and v√= (1, 2, 1, 2). 2 u1 = v1 = (−1, 1, 1, 1), ||u1 || = 1 + 1 + 1 + 1 = 2. v2 , u1 = (1, 2, 1, 2), (−1, 1, 1, 1) = 1(−1) + 2 · 1 + 1 · 1 + 2 · 1 = 4. v2 , u1 u2 = v2 − u1 = (1, 2, 1, 2) − (−1, 1, 1, 1) = (2, 1, 0, 1). ||u1 ||2 √ √ ||u2 || = 4 + 1 + 0 + 1 = 6. Hence, an orthonormal basis is: √ 1 6 (−1, 1, 1, 1), (2, 1, 0, 1) . 2 6 17. Let v1 = (1, 0, −1, 0), v2 = √ , 1, −1, 0) and v√= (−1, 1, 0, 1). (1 3 u1 = v1 = (1, 0, −1, 0), ||u1 || = 1 + 0 + 1 + 0 = 2. , 321 v2 , u1 = (1, 1, −1, 0), (1, 0, −1, 0) = 1 · 1 + 1 · 0 + (−1)(−1) + 0 · 0 = 2. v2 , u1 u2 = v2 − u1 = (1, 1, −1, 0) − (1, 0, −1, 0) = (0, 1, 0, 0). ||u1 ||2 √ ||u2 || = 0 + 1 + 0 + 0 = 1. v3 , u1 = (−1, 1, 0, 1), (1, 0, −1, 0) = (−1)1 + 1 · 0 + 0(−1) + 1 · 0 = −1. v3 , u2 = (−1, 1, 0, 1), (0, 1, 0, 0) = (−1)0 + 1 · 1 + 0 · 0 + 1 · 0 = 1. v3 , u2 1 1 v3 , u1 u1 − u2 = (−1, 1, 0, 1) + (1, 0, −1, 0) − (0, 1, 0, 0) = (−1, 0, −1, 2); u3 = v3 − ||u1 ||2 ||u√ 2 || 2 2 2 1√ 6 ||u3 || = 1+0+1+4= . Hence, an orthonormal basis is: 2 2 √ √ 2 6 (1, 0, −1, 0), (0, 1, 0, 0), (−1, 0, −1, 2) . 2 6 18. Let v1 = (1, 2, 0, 1), v2 = √ , 1, 1, 0) and v3 √ (1, 0, 2, 1). (2 = u1 = v1 = (1, 2, 0, 1), ||u1 || = 1 + 4 + 0 + 1 = 6. v2 , u1 = (2, 1, 1, 0), (1, 2, 0, 1) = 2 · 1 + 1 · 2 + 1 · 0 + 0 · 1 = 4. 2 1 v2 , u1 u1 = (2, 1, 1, 0) − (1, 2, 0, 1) = (4, −1, 3, −2). 
u2 = v2 − ||u1 ||2 3 3 1√ 1√ ||u2 || = 16 + 1 + 9 + 4 = 30. 3 3 v3 , u1 = (1, 0, 2, 1), (1, 2, 0, 1) = 1 · 1 + 0 · 2 + 2 · 0 + 1 · 1 = 2. 8 . 3 v3 , u1 v3 , u2 1 4 2 u3 = v3 − u1 − u2 = (1, 0, 2, 1) − (1, 2, 0, 1) − (4, −1, 3, −2) = (−1, −1, 3, 3); 2 2 ||u1 || ||u2 √ || 3 15 5 2√ 45 . Hence, an orthonormal basis is: ||u3 || = 1+1+9+9= 5 5 √ √ √ 6 30 5 (1, 2, 0, 1), (4, −1, 3, −2), (−1, −1, 3, 3) . 6 30 10 v3 , u2 = (1, 0, 2, 1), (4/3, −1/3, 1, −2/3) = 1(4/3) + 0(−1/3) + 2 · 1 + 1(−2/3) = 19. Let v1 = (1, 1, −1, 0), v2 = √ 1, 0, 1, 1) and v√= (2, −1, 2, 1). (− 3 u1 = v1 = (1, 1, −1, 0), ||u1 || = 1 + 1 + 1 + 0 = 3. v2 , u1 = (−1, 0, 1, 1), (1, 1, −1, 0) = −1 · 1 + 0 · 1 + 1(−1) + 1 · 0 = −2. v2 , u1 −2 1 u2 = v2 − u1 = (−1, 0, 1, 1) − (1, 1, −1, 0) = (−1, 2, 1, 3). 2 ||u1 || 3 3 √ 1√ 15 ||u2 || = 1+4+1+9= . 3 3 v3 , u1 = (2, −1, 2, 1), (1, 1, −1, 0) = 2 · 1 + −1 · 1 + 2(−1) + 1 · 0 = −1. 1 . 3 v3 , u1 v3 , u2 1 1 4 u3 = v3 − u1 − u2 = (2, −1, 2, 1) + (1, 1, −1, 0) − (−1, 2, 1, 3) = (3, −1, 2, 1); 2 ||u2 ||2 3 15 5 √ ||u1 || 4 15 ||u3 || = . Hence, an orthonormal basis is: 5 √ √ √ 3 15 15 (1, 1, −1, 0), (−1, 2, 1, 3), (3, −1, 2, 1) . 3 15 15 v3 , u2 = (2, −1, 2, 1), (−1/3, 2/3, 1/3, 1) = 2(−1/3) + (−1)(2/3) + 2(1/3) + 1 · 1 = 322 1 −2 1 1 1 . Hence, a basis for rowspace(A) is {(1, −2, 1), (0, 7, 1))}. 20. A row-echelon form of A is 0 7 0 00 The vectors in this basis are not orthogonal. We therefore use the Gram-Schmidt process to determine an orthogonal basis. Let v1 = (1, −2, 1), v2 = (0, 7, 1). Then an orthogonal basis for rowspace(A) is {u1 , u2 }, where 1 13 u1 = (1, −2, 1), u2 = (0, 7, 1) + (1, −2, 1) = (13, 16, 19). 6 6 21. Let v1 = (1 − i, 0, i) and v2 = (1, 1 + i, 0). √ u1 = v1 = (1 − i, 0, i), ||u1 || = (1 − i)(1 + i) + 0 + i(−i) = 3. v2 , u1 = (1, 1 + i, 0), (1 − i, 0, i) = 1(1 + i) + (1 + i)0 + 0(−i) = 1 + i. 1+i v2 , u1 1 u1 = (1, 1 + i, 0) − u2 = v2 − (1 − i, 0, i) = (1, 3 + 3i, 1 − i). 
||u1 ||2 3 3 √ 1 21 ||u2 || = 1 + (3 + 3i)(3 − 3i) + (1 − i)(1 + i) = . Hence, an orthonormal basis is: 3 3 √ √ 3 21 (1 − i, 0, i), (1, 3 + 3i, 1 − i) . 3 21 22. Let v1 = (1 + i, i, 2 − i) and v2 = (1 + 2i, 1 − i, i). √ u1 = v1 = (1 + i, i, 2 − i), ||u1 || = (1 + i)(1 − i) + i(−i) + (2 − i)(2 + i) = 2 2. v2 , u1 = (1 + 2i)(1 − i) + (1 − i)(−i) + i(2 + i) = 1 + 2i. v2 , u1 1 1 u2 = v2 − u1 = (1 + 2i, 1 − i, i) − (1 + 2i)(1 + i, i, 2 − i) = (9 + 13i, 10 − 9i, −4 + 5i). 2 ||u1 || 8 8 √ 1 118 ||u2 || = (9 + 13i)(9 − 13i) + (10 − 9i)(10 + 9i) + (−4 + 5i)(−4 − 5i) = . Hence, an orthonormal 8 4 basis is: √ √ 2 118 (1 + i, i, 2 − i), (9 + 13i, 10 − 9i, −4 + 5i) . 4 236 23. Let f1 = 1, f2 = x and f3 = x2 . g1 = f1 = 1; 1 ||g1 ||2 = 1 dx = 1; f2 , g1 = 0 xdx = 0 x2 2 1 = 0 1 . 2 f2 , g1 1 1 g2 = f2 − g1 = x − = (2x − 1). ||g1 ||2 2 2 1 2 1 1 1 1 x3 x2 x 1 dx = x2 − x + dx = − + = . ||g2 ||2 = x− 2 4 3 2 40 12 0 0 1 1 x3 1 f3 , g1 = x2 dx = =. 30 3 0 1 1 1 x4 x3 1 f3 , g2 = x2 x − dx = − = . 2 4 60 12 0 f3 , g1 f3 , g2 1 1 1 g3 = f3 − g1 − g2 = x2 − − x − = (6x2 − 6x + 1). Thus, an orthogonal basis is given ||g1 ||2 ||g2 ||2 3 2 6 by: 1 1 1, (2x − 1), (6x2 − 6x + 1) . 2 6 323 24. Let f1 = 1, f2 = x2 and f3 = x4 for all x in [−1, 1]. 1 1 2 2 f1 , f2 = x2 dx = , f1 , f3 = x4 dx = , and f2 , f3 = 3 5 −1 −1 1 1 2 x4 dx = . dx = 2, ||f2 ||2 = ||f1 ||2 = 5 −1 −1 Let g1 = f1 = 1. x2 , 1 1 f2 , g1 1 g = x2 − · 1 = x2 − = (3x − 1). g2 = f2 − 21 ||g1 || ||1||2 3 3 g3 = f3 − 1 x6 dx = −1 2 . 7 1 x4 , x2 − 3 f3 , g2 x4 , 1 f3 , g1 g− g = x4 − ·1− 12 21 22 2 ||g1 || ||g2 || ||1|| ||x2 − 3 || = x4 − x2 − 1 3 3 1 6x2 + = (35x4 − 30x2 + 3). 7 35 35 Thus, an orthogonal basis is given by: 1, 1 1 (3x2 − 1), (35x4 − 30x2 + 3) . 3 35 ππ 25. Let f1 = 1, f2 = sin x and f3 = cos x for all x in − , . 22 π /2 f1 , f2 = −π/2 π /2 f2 , f3 = π/2 sin xdx = [− cos 2x]−π/2 = 0. Therefore, f1 and f2 are orthogonal. sin x cos xdx = −π/2 nal. 1 2 π /2 f1 , f3 = −π/2 π /2 1 sin 2xdx = − cos 2x 4 −π/2 π /2 = 0. 
Therefore, f2 and f3 are orthogo−π/2 π/2 cos xdx = [sin x]−π/2 = 2. π /2 Let g1 = f1 = 1 so that ||g1 ||2 = dx = π . −π/2 π /2 2 π /2 1 − cos 2x π dx = . 2 2 −π/2 −π/2 f3 , g1 f3 , g2 2 1 g3 = f3 − g1 − g2 = cos x − · 1 − 0 · sin x = (π cos x − 2). Thus, an orthogonal basis for ||g1 ||2 ||g2 ||2 π π the subspace of C 0 [−π/2, π/2] spanned by {1, sin x, cos x} is: g2 = f2 = sin x, and ||g2 ||2 = sin xdx = 1, sin x, 26. Given A1 = 1 −1 2 1 and A2 = 2 −3 4 1 1 (π cos x − 2) . π . Using the Gram-Schmidt procedure: 1 −1 , A2 , B1 = 10 + 6 + 24 + 5 = 45, and ||B1 ||2 = 5 + 2 + 12 + 5 = 24. 2 1 1 15 1 −1 A2 , B1 2 −3 −9 8 8 B1 = − = . Thus, an orthogonal basis for the B2 = A2 − 1 4 1 2 1 −7 ||B1 ||2 8 4 8 subspace of M2 (R) spanned by A1 and A2 is: B1 = 1 −1 2 1 , 1 8 1 4 −9 8 −7 8 . 324 27. Given A1 = 0 1 1 0 , A2 = 0 1 1 1 1 1 and A3 = 1 0 . Using the Gram-Schmidt procedure: 1 , A2 , B1 = 5, and ||B1 ||2 = 5. 0 A2 , B1 01 01 00 B1 = − = . B2 = A2 − 11 10 01 ||B1 ||2 2 Also, A3 , B1 = 5, A3 , B2 = 0, and ||B2 || = 5, so that A3 , B2 A3 , B1 11 01 B1 − B2 = − −0 B3 = A3 − 10 10 ||B1 ||2 ||B2 ||2 basis for the subspace of M2 (R) spanned by A1 , A2 , and A3 is: B1 = 0 1 0 1 1 0 , 1 0 0 0 , 0 0 0 0 0 1 0 1 = 1 0 0 0 . Thus, an orthogonal , which is the subspace of all symmetric matrices in M2 (R). 28. Given p1 (x) = 1 − 2x + 2x2 and p2 (x) = 2 − x − x2 . Using the Gram-Schmidt procedure: q1 = 1 − 2x + 2x2 , p2 , q1 = 2 · 1 + (−1)(−2) + (−1)2 = 2, ||q1 ||2 = 12 + (−2)2 + 22 = 9. So, 2 1 p2 , q 1 q1 = 2 − x − x2 − (1 − 2x + 2x2 ) = (16 − 5x − 13x2 ). Thus, an orthogonal basis for the q 2 = p2 − ||q1 ||2 9 9 subspace spanned by p1 and p2 is {1 − 2x + 2x2 , 16 − 5x − 13x2 }. 29. Given p1 (x) = 1 + x2 , p2 (x) = 2 − x + x3 , and p3 (x) = −x + 2x2 . Using the Gram-Schmidt procedure: q1 = 1 + x2 , p2 , q1 = 2 · 1 + (−1)(0) + 0 · 1 + 1 · 2 = 2, and q1 2 = 12 + 12 = 2. So, p2 , q 1 q 2 = p2 − q1 = 2 − x + x3 − (1 + x2 ) = 1 − x − x2 + x3 . 
Also, ||q1 ||2 p3 , q1 = 0 · 1 + (−1)0 + 2 · 1 + 02 = 2 p3 , q2 = 0 · 1 + (−1)2 + 2(−1) + 0 · 1 = −1, and ||q2 ||2 = 12 + (−1)2 + (−1)2 + 12 = 4 so that p3 , q 1 p3 , q 2 1 1 q 3 = p3 − q− q = −x + 2x2 − (1 + x2 ) + (1 − x − x2 + x3 ) = (−3 − 5x + 3x2 + x3 ). Thus, an 21 22 ||q1 || ||q2 || 4 4 orthogonal basis for the subspace spanned by p1 , p2 , and p3 is {1 + x2 , 1 − x − x2 + x3 , −3 − 5x + 3x2 + x3 }. 30. {u1 , u2 , v} is a linearly independent set of vectors, and u1 , u2 = 0. If we let u3 = v + λu1 + µu2 , then it must be the case that u3 , u1 = 0 and u3 , u2 = 0. u3 , u1 = 0 =⇒ v + λu1 + µu2 , u1 = 0 =⇒ v, u1 + λ u1 , u1 + µ u2 , u1 = 0 =⇒ v, u1 + λ u1 , u1 + µ · 0 = 0 v, u1 =⇒ λ = − . ||u1 ||2 u3 , u2 = 0 =⇒ v + λu1 + µu2 , u2 = 0 =⇒ v, u2 + λ u1 , u2 + µ u2 , u2 = 0 =⇒ v, u2 + λ · 0 + µ u2 , u2 = 0 v, u2 =⇒ µ = − . ||u2 ||2 v, u1 v, u2 Hence, if u3 = v − u− u2 , then {u1 , u2 , u3 } is an orthogonal basis for the subspace spanned 21 ||u1 || ||u2 ||2 by {u1 , u2 , v}. 31. It was shown in Remark 3 following Deﬁnition 4.12.1 that each vector ui is a unit vector. Moreover, for i = j, 1 1 1 ui , uj = vi , vj = vi , vj = 0, vi vj vi vj since vi , vj = 0. Therefore {u1 , u2 , . . . , uk } is an orthonormal set of vectors. 325 32. Set z = x − P(x, v1 ) − P(x, v2 ) − · · · − P(x, vk ). To verify that z is orthogonal to vi , we compute the inner product of z with vi : z, vi = x − P(x, v1 ) − P(x, v2 ) − · · · − P(x, vk ), vi = x, vi − P(x, v1 ), vi − P(x, v2 ), vi − · · · − P(x, vk ), vi . Since P(x, vj ) is a multiple of vj and the set {v1 , v2 , . . . , vk } is orthogonal, all of the subtracted terms on the right-hand side of this expression are zero except P(x, vi ), vi = x, vi x, vi vi , vi = vi , vi = x, vi . vi 2 vi 2 Therefore, z, vi = x, vi − x, vi = 0, which implies that z is orthogonal to vi . 33. We must show that W ⊥ is closed under addition and closed under scalar multiplication. Closure under addition: Let v1 and v2 belong to W ⊥ . 
This means that v1 , w = v2 , w = 0 for all w ∈ W . Therefore, v1 + v2 , w = v1 , w + v2 , w = 0 + 0 = 0 for all w ∈ W . Therefore, v1 + v2 ∈ W ⊥ . Closure under scalar multiplication: Let v belong to W ⊥ and let c be a scalar. This means that v, w = 0 for all w ∈ W . Therefore, cv, w = c v, w = c · 0 = 0, which shows that cv ∈ W ⊥ . 34. In this case, W ⊥ consists of all (x, y, z ) ∈ R3 such that (x, y, z ), (r, r, −r) = 0 for all r ∈ R. That is, rx + ry − rz = 0. In particular, we must have x + y − z = 0. Therefore, W ⊥ is the plane x + y − z = 0, which can also be expressed as W ⊥ = span{(−1, 0, 1), (0, 1, 1)}. 35. In this case, W ⊥ consists of all (x, y, z, w) ∈ R4 such that (x, y, z, w), (0, 1, −1, 3) = (x, y, z, w), (1, 0, 0, 3) = 0. This requires that y − z + 3w = 0 and x + 3w = 0. We can associated an augmented matrix with this 0 1 −1 3 0 system of linear equations: . Note that z and w are free variables: z = s and w = t. 10 030 Then y = s − 3t and x = −3t. Thus, W ⊥ = {(−3t, s−3t, s, t) : s, t ∈ R} = {t(−3, −3, 0, 1)+s(0, 1, 1, 0) : s, t ∈ R} = span{(−3, −3, 0, 1), (0, 1, 1, 0)}. xy zw 36. In this case, W ⊥ consists of all 2 × 2 matrices The set of symmetric matrices is spanned by the matrices that are orthogonal to all symmetric matrices. 1 0 0 0 , 0 1 1 0 0 0 , 0 1 . Thus, we must have xy zw , 1 0 0 0 = xy zw , 0 1 1 0 = xy zw , 0 0 0 1 = 0. 326 Thus, x = 0, y + z = 0 and w = 0. Therefore W⊥ = −z 0 0 z :z∈R 0 −1 1 0 = span , which is precisely the set of 2 × 2 skew-symmetric matrices. 37. Suppose v belongs to both W and W ⊥ . Then v, v = 0 by deﬁnition of W ⊥ , which implies that v = 0 by the ﬁrst axiom of an inner product. Therefore W and W ⊥ can contain no common elements aside from the zero vector. 38. Suppose that W1 is a subset of W2 and let v be a vector in (W2 )⊥ . This means that v, w2 = 0 for all w2 ∈ W2 . In particular, v is orthogonal to all vectors in W1 (since W1 is merely a subset of W2 ). Thus, v ∈ (W1 )⊥ . 
Hence, we have shown that every vector belonging to (W2 )⊥ also belongs to (W1 )⊥ . 39. (a) Using technology we ﬁnd that π π sin nx dx = 0, π cos nx dx = 0, −π sin nx cos mx dx = 0. −π Further, for m = n, −π π π sin nx sin mx dx = 0 and cos nx cos mx dx = 0. −π −π Consequently the given set of vectors is orthogonal on [−π, π ]. (b) Multiplying (4.12.7) by cos mx and integrating over [−π, π ] yields π f (x) cos mx dx = −π 1 a0 2 π π ∞ cos mx dx + −π (an cos nx + bn sin nx) cos mx dx. −π n=1 Assuming that interchange of the integral and inﬁnite summation is permissible, this can be written π f (x) cos mx dx = −π which reduces to 1 a0 2 ∞ π n=1 π −π cos mx dx + −π π f (x) cos mx dx = −π 1 a0 2 (an cos nx + bn sin nx) cos mx dx. π π cos2 mx dx cos mx dx + am −π −π where we have used the results from part (a). When m = 0, this gives π f (x) dx = −π 1 a0 2 π dx = πa0 =⇒ a0 = −π 1 π π f (x) dx, −π whereas for m = 0, π π cos2 mx dx = πam =⇒ am = f (x) cos mx dx = am −π −π 1 π π f (x) cos mx dx. −π (c) Multiplying (4.12.7) by sin(mx), integrating over [−π, π ], and interchanging the integration and summation yields π 1 f (x) sin mx dx = a0 2 −π ∞ π π n=1 −π sin mx dx + −π (an cos nx + bn sin nx) sin mx dx. 327 Using the results from (a), this reduces to π π sin2 mx dx = πbn =⇒ bn = f (x) sin mx dx = bn −π −π 1 π π f (x) sin mx dx. −π (d) The Fourier coeﬃcients for f are a0 = bn = π 1 π xdx = 0, an = −π π 1 π x sin nx dx = − −π The Fourier series for f is 1 π π x cos nx dx = 0, −π 2 2 cos nπ = (−1)n+1 . n n ∞ 2 (−1)n+1 sin nx. n n=1 (e) The approximations using the ﬁrst term, the ﬁrst three terms, the ﬁrst ﬁve terms, and the ﬁrst ten terms in the Fourier series for f are shown in the accompanying ﬁgures. These ﬁgures suggest that the Fourier series is converging to the function f (x) at all points in the interval (−π, π ). 
S3(x) 3 2 1 x 3 2 1 1 2 3 1 2 3 Figure 66: Figure for Problem 39(e) - 3 terms included 328 S5(x) 3 2 1 3 2 1 1 2 3 x 1 2 3 Figure 67: Figure for Problem 39(e) - 5 terms included S10(x) 3 2 1 3 2 1 1 2 3 x 1 2 3 Figure 68: Figure for Problem 39(e) - 10 terms included Solutions to Section 4.13 Problems: 1. Write v = (a1 , a2 , a3 , a4 , a5 ) ∈ R5 . Then we have (r + s)v = (r + s)(a1 , a2 , a3 , a4 , a5 ) = ((r + s)a1 , (r + s)a2 , (r + s)a3 , (r + s)a4 , (r + s)a5 ) = (ra1 + sa1 , ra2 + sa2 , ra3 + sa3 , ra4 + sa4 , ra5 + sa5 ) = (ra1 , ra2 , ra3 , ra4 , ra5 ) + (sa1 , sa2 , sa3 , sa4 , sa5 ) = r(a1 , a2 , a3 , a4 , a5 ) + s(a1 , a2 , a3 , a4 , a5 ) = rv + sv. 329 2. Write v = (a1 , a2 , a3 , a4 , a5 ) and w = (b1 , b2 , b3 , b4 , b5 ) in R5 . Then we have r(v + w) = r((a1 , a2 , a3 , a4 , a5 ) + (b1 , b2 , b3 , b4 , b5 )) = r(a1 + b1 , a2 + b2 , a3 + b3 , a4 + b4 , a5 + b5 ) = (r(a1 + b1 ), r(a2 + b2 ), r(a3 + b3 ), r(a4 + b4 ), r(a5 + b5 )) = (ra1 + rb1 , ra2 + rb2 , ra3 + rb3 , ra4 + rb4 , ra5 + rb5 ) = (ra1 , ra2 , ra3 , ra4 , ra5 ) + (rb1 , rb2 , rb3 , rb4 , rb5 ) = r(a1 , a2 , a3 , a4 , a5 ) + r(b1 , b2 , b3 , b4 , b5 ) = r v + r w. 3. NO. This set of polynomials is not closed under scalar multiplication. For example, the polynomial 1 p(x) = 2x belongs to the set, but 3 p(x) = 2 x does not belong to the set (since 2 is not an even integer). 3 3 4. YES. This set of polynomials forms a subspace of the vector space P5 . To conﬁrm this, we will check that this set is closed under addition and scalar multiplication: Closure under Addition: Let p(x) = a0 + a1 x + a4 x4 + a5 x5 and q (x) = b0 + b1 x + b4 x4 + b5 x5 be polynomials in the set under consideration (their x2 and x3 terms are zero). Then p(x) + q (x) = (a0 + b0 ) + (a1 + b1 )x + · · · + (a4 + b4 )x4 + (a5 + b5 )x5 is again in the set (since it still has no x2 or x3 terms). So closure under addition holds. 
Closure under Scalar Multiplication: Let p(x) = a0 + a1 x + a4 x4 + a5 x5 be in the set, and let k be a scalar. Then kp(x) = (ka0 ) + (ka1 )x + (ka4 )x4 + (ka5 )x5 , which is again in the set (since it still has no x2 or x3 terms). So closure under scalar multiplication holds. 5. NO. We can see immediately that the zero vector (0, 0, 0) is not a solution to this linear system (the ﬁrst equation is not satisﬁed by the zero vector), and therefore, we know at once that this set cannot be a vector space. 6. YES. The set of solutions to this linear system forms a subspace of R3 . To conﬁrm this, we will check that this set is closed under addition and scalar multiplication: Closure under Addition: Let (a1 , a2 , a3 ) and (b1 , b2 , b3 ) be solutions to the linear system. This means that 4a1 − 7a2 + 2a3 = 0, 5a1 − 2a2 + 9a3 = 0 4b1 − 7b2 + 2b3 = 0, 5b1 − 2b2 + 9b3 = 0. and Adding the equations on the left, we get 4(a1 + b1 ) − 7(a2 + b2 ) + 2(a3 + b3 ) = 0, so the vector (a1 + b1 , a2 + b2 , a3 + b3 ) satisﬁes the ﬁrst equation in the linear system. Likewise, adding the equations on the right, we get 5(a1 + b1 ) − 2(a2 + b2 ) + 9(a3 + b3 ) = 0, so (a1 + b1 , a2 + b2 , a3 + b3 ) also satisﬁes the second equation in the linear system. Therefore, (a1 + b1 , a2 + b2 , a3 + b3 ) is in the solution set for the linear system, and closure under addition therefore holds. 330 Closure under Scalar Multiplication: Let (a1 , a2 , a3 ) be a solution to the linear system, and let k be a scalar. We have 4a1 − 7a2 + 2a3 = 0, 5a1 − 2a2 + 9a3 = 0, and so, multiplying both equations by k , we have k (4a1 − 7a2 + 2a3 ) = 0, k (5a1 − 2a2 + 9a3 ) = 0, or 4(ka1 ) − 7(ka2 ) + 2(ka3 ) = 0, 5(ka1 ) − 2(ka2 ) + 9(ka3 ) = 0. Thus, the vector (ka1 , ka2 , ka3 ) is a solution to the linear system, and closure under scalar multiplication therefore holds. 7. NO. This set is not closed under addition. 
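(Before turning to Problem 7's counterexample, the closure computations of Problem 6 can be spot-checked numerically; the particular solution below, obtained as the cross product of the two normal vectors, is only illustrative:)

```python
def in_solution_set(v):
    """Check 4x - 7y + 2z = 0 and 5x - 2y + 9z = 0 in exact integer arithmetic."""
    x, y, z = v
    return 4*x - 7*y + 2*z == 0 and 5*x - 2*y + 9*z == 0

# (-59, -26, 27) is the cross product of the normals (4,-7,2) and (5,-2,9),
# hence a nonzero solution of the system; w is twice u
u = (-59, -26, 27)
w = (-118, -52, 54)
assert in_solution_set(u) and in_solution_set(w)
total = tuple(a + b for a, b in zip(u, w))
assert in_solution_set(total)                       # closed under addition
assert in_solution_set(tuple(-3 * a for a in u))    # closed under scalar mult.
```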
For example, the matrices

[1 1; 1 1] and [−1 1; 1 1]

both belong to the set (their entries are all nonzero), but

[1 1; 1 1] + [−1 1; 1 1] = [0 2; 2 2],

which does not belong to the set (some entries are zero, and some are nonzero). So closure under addition fails, and therefore, this set does not form a vector space.

8. YES. The set of 2 × 2 real matrices that commute with C = [1 2; 2 2] forms a subspace of M2(R). To confirm this, we will check that this set is closed under addition and scalar multiplication:

Closure under Addition: Let A and B be 2 × 2 real matrices that commute with C. That is, AC = CA and BC = CB. Then

(A + B)C = AC + BC = CA + CB = C(A + B),

so A + B commutes with C, and therefore, closure under addition holds.

Closure under Scalar Multiplication: Let A be a 2 × 2 real matrix that commutes with C, and let k be a scalar. Then since AC = CA, we have

(kA)C = k(AC) = k(CA) = C(kA),

so kA is still in the set. Thus, the set is also closed under scalar multiplication.

9. YES. The set of functions f : [0, 1] → [0, 1] such that f(0) = f(1/4) = f(1/2) = f(3/4) = f(1) = 0 is a subspace of the vector space of all functions [0, 1] → [0, 1]. We confirm this by checking that this set is closed under addition and scalar multiplication:

Closure under Addition: Let g and h be functions such that

g(0) = g(1/4) = g(1/2) = g(3/4) = g(1) = 0 and h(0) = h(1/4) = h(1/2) = h(3/4) = h(1) = 0.

Now

(g + h)(0) = g(0) + h(0) = 0 + 0 = 0,
(g + h)(1/4) = g(1/4) + h(1/4) = 0 + 0 = 0,
(g + h)(1/2) = g(1/2) + h(1/2) = 0 + 0 = 0,
(g + h)(3/4) = g(3/4) + h(3/4) = 0 + 0 = 0,
(g + h)(1) = g(1) + h(1) = 0 + 0 = 0.

Thus, g + h belongs to the set, and so the set is closed under addition.

Closure under Scalar Multiplication: Let g be a function with

g(0) = g(1/4) = g(1/2) = g(3/4) = g(1) = 0,

and let k be a scalar. Then

(kg)(0) = k · g(0) = k · 0 = 0,
(kg)(1/4) = k · g(1/4) = k · 0 = 0,
(kg)(1/2) = k · g(1/2) = k · 0 = 0,
(kg)(3/4) = k · g(3/4) = k · 0 = 0,
(kg)(1) = k · g(1) = k · 0 = 0.
Thus, kg belongs to the set, and so the set is closed under scalar multiplication.

10. NO. This set is not closed under addition. For example, the function f defined by f(x) = x belongs to the set, and the function g defined by

g(x) = x if x ≤ 1/2, g(x) = 0 if x > 1/2

belongs to the set. But

(f + g)(x) = 2x if x ≤ 1/2, (f + g)(x) = x if x > 1/2.

Then f + g : [0, 1] → [0, 1], but for 0 < x ≤ 1/2, |(f + g)(x)| = 2x > x, so f + g is not in the set. Therefore, the set is not closed under addition.

11. NO. This set is not closed under addition. For example, if we let

A = [1 0; 0 1] and B = [0 0; 1 0],

then A² = A is symmetric, and B² = 0₂ is symmetric, but

(A + B)² = [1 0; 1 1]² = [1 0; 2 1]

is not symmetric, so A + B is not in the set. Thus, the set in question is not closed under addition.

12. YES. Let us describe geometrically the set of points equidistant from (−1, 2) and (1, −2). If (x, y) is such a point, then using the distance formula and equating distances to (−1, 2) and (1, −2), we have

√((x + 1)² + (y − 2)²) = √((x − 1)² + (y + 2)²)

or

(x + 1)² + (y − 2)² = (x − 1)² + (y + 2)²

or

x² + 2x + 1 + y² − 4y + 4 = x² − 2x + 1 + y² + 4y + 4.

Cancelling like terms and rearranging this equation, we have 4x = 8y. So the points that are equidistant from (−1, 2) and (1, −2) lie on the line through the origin with equation y = (1/2)x. Any line through the origin of R² is a subspace of R². Therefore, this line forms a vector space.

13. NO. This set is not closed under addition, nor under scalar multiplication. For instance, the point (5, −3, 4) is a distance 5 from (0, −3, 4), so the point (5, −3, 4) lies in the set. But the point (10, −6, 8) = 2(5, −3, 4) is a distance

√((10 − 0)² + (−6 + 3)² + (8 − 4)²) = √(100 + 9 + 16) = √125 = 5√5

from (0, −3, 4), so (10, −6, 8) is not in the set. So the set is not closed under scalar multiplication, and hence does not form a subspace.

14. We must check each of the vector space axioms (A1)–(A10).

Axiom (A1): Assume that (a1, a2) and (b1, b2) belong to V.
Then a2, b2 > 0. Hence,

(a1, a2) + (b1, b2) = (a1 + b1, a2 b2) ∈ V,

since a2 b2 > 0. Thus, V is closed under addition.

Axiom (A2): Assume that (a1, a2) ∈ V, and let k be a scalar. Note that since a2 > 0, the expression a2^k > 0 for every k ∈ R. Hence, k(a1, a2) = (ka1, a2^k) ∈ V, thereby showing that V is closed under scalar multiplication.

Axiom (A3): Let (a1, a2), (b1, b2) ∈ V. We have

(a1, a2) + (b1, b2) = (a1 + b1, a2 b2) = (b1 + a1, b2 a2) = (b1, b2) + (a1, a2),

as required.

Axiom (A4): Let (a1, a2), (b1, b2), (c1, c2) ∈ V. We have

((a1, a2) + (b1, b2)) + (c1, c2) = (a1 + b1, a2 b2) + (c1, c2)
= ((a1 + b1) + c1, (a2 b2)c2)
= (a1 + (b1 + c1), a2 (b2 c2))
= (a1, a2) + (b1 + c1, b2 c2)
= (a1, a2) + ((b1, b2) + (c1, c2)),

as required.

Axiom (A5): We claim that (0, 1) is the zero vector in V. To see this, let (b1, b2) ∈ V. Then

(0, 1) + (b1, b2) = (0 + b1, 1 · b2) = (b1, b2).

Since this holds for every (b1, b2) ∈ V, we conclude that (0, 1) is the zero vector.

Axiom (A6): We claim that the additive inverse of a vector (a1, a2) in V is the vector (−a1, a2^(−1)). (Note that a2^(−1) > 0 since a2 > 0.) To check this, we compute as follows:

(a1, a2) + (−a1, a2^(−1)) = (a1 + (−a1), a2 a2^(−1)) = (0, 1).

Axiom (A7): We have

1 · (a1, a2) = (a1, a2^1) = (a1, a2)

for all (a1, a2) ∈ V.

Axiom (A8): Let (a1, a2) ∈ V, and let r and s be scalars. Then we have

(rs)(a1, a2) = ((rs)a1, a2^(rs)) = (r(sa1), (a2^s)^r) = r(sa1, a2^s) = r(s(a1, a2)),

as required.

Axiom (A9): Let (a1, a2) and (b1, b2) be members of V, and let r be a scalar. We have

r((a1, a2) + (b1, b2)) = r(a1 + b1, a2 b2)
= (r(a1 + b1), (a2 b2)^r)
= (ra1 + rb1, a2^r b2^r)
= (ra1, a2^r) + (rb1, b2^r)
= r(a1, a2) + r(b1, b2),

as required.

Axiom (A10): Let (a1, a2) ∈ V, and let r and s be scalars.
We have

(r + s)(a1, a2) = ((r + s)a1, a2^(r+s)) = (ra1 + sa1, a2^r a2^s) = (ra1, a2^r) + (sa1, a2^s) = r(a1, a2) + s(a1, a2),

as required.

15. We must show that W is closed under addition and closed under scalar multiplication:

Closure under Addition: Let (a, 2^a) and (b, 2^b) be elements of W. Now consider the sum of these elements:

(a, 2^a) + (b, 2^b) = (a + b, 2^a 2^b) = (a + b, 2^(a+b)) ∈ W,

which shows that W is closed under addition.

Closure under Scalar Multiplication: Let (a, 2^a) be an element of W, and let k be a scalar. Then

k(a, 2^a) = (ka, (2^a)^k) = (ka, 2^(ka)) ∈ W,

which shows that W is closed under scalar multiplication. Thus, W is a subspace of V.

16. Note that 3(1, 2) = (3, 2³) = (3, 8), so the second vector is a multiple of the first one under the vector space operations of V from Problem 14. Therefore, {(1, 2), (3, 8)} is linearly dependent.

17. We show that S = {(1, 4), (2, 1)} is linearly independent and spans V.

S is linearly independent: Assume that c1(1, 4) + c2(2, 1) = (0, 1). This can be written

(c1, 4^c1) + (2c2, 1^c2) = (0, 1) or (c1 + 2c2, 4^c1) = (0, 1).

In order for 4^c1 = 1, we must have c1 = 0. And then in order for c1 + 2c2 = 0, we must have c2 = 0. Therefore, S is linearly independent.

S spans V: Consider an arbitrary vector (a1, a2) ∈ V, where a2 > 0. We must find constants c1 and c2 such that c1(1, 4) + c2(2, 1) = (a1, a2). Thus,

(c1, 4^c1) + (2c2, 1^c2) = (a1, a2) or (c1 + 2c2, 4^c1) = (a1, a2).

Hence, c1 + 2c2 = a1 and 4^c1 = a2. From the second equation, we conclude that c1 = log_4(a2). Thus, from the first equation,

c2 = (1/2)(a1 − log_4(a2)).

Hence, since we were able to find constants c1 and c2 such that

c1(1, 4) + c2(2, 1) = (a1, a2),

we conclude that {(1, 4), (2, 1)} spans V.

18. This really hinges on whether or not the given vectors are linearly dependent or linearly independent.
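Anticipating the computation that follows, the dependence question can be settled mechanically: the coefficient matrix of the homogeneous system set up below is singular, and an explicit dependence relation can be exhibited. A sketch:

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# rows: coefficients of (c1, c2, c3) in 2c1+4c2+c3 = 0, -2c2+c3 = 0, c1+3c2 = 0
M = [[2, 4, 1], [0, -2, 1], [1, 3, 0]]
assert det3(M) == 0          # singular, so nontrivial solutions exist

# an explicit nontrivial solution: (c1, c2, c3) = (-3, 1, 2), i.e.
# -3(2 + x^2) + (4 - 2x + 3x^2) + 2(1 + x) = 0
c1, c2, c3 = -3, 1, 2
coeffs = [2*c1 + 4*c2 + 1*c3,     # constant term
          -2*c2 + 1*c3,           # x term
          1*c1 + 3*c2]            # x^2 term
assert coeffs == [0, 0, 0]
```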
If we assume that c1 (2 + x2 ) + c2 (4 − 2x + 3x2 ) + c3 (1 + x) = 0, 335 then (2c1 + 4c2 + c3 ) + (−2c2 + c3 )x + (c1 + 3c2 )x2 = 0. Thus, we have − 2c2 + c3 = 0 2c1 + 4c2 + c3 = 0 c1 + 3c2 = 0. Since the matrix of coeﬃcients 2 41 0 −2 1 1 30 fails to be invertible, we conclude that there will be non-trivial solutions for c1 , c2 , and c3 . Thus, the polynomials are linearly dependent. We can therefore remove a vector from the set without decreasing the span. We remove 1 + x, leaving us with {2 + x2 , 4 − 2x + 3x2 }. Since these polynomials are not proportional, they are now linearly independent, and hence, they are a basis for their span. Hence, dim{2 + x2 , 4 − 2x + 3x2 , 1 + x} = 2. 19. NO. This set is not closed under scalar multiplication. For example, (1, 1) belongs to W , but 2 · (1, 1) = (2, 2) does not belong to W . 20. NO. This set is not closed under scalar multiplication. For example, (1, 1) belongs to W , but 2 · (1, 1) = (2, 2) does not belong to W . 21. NO. The zero vector (zero matrix) is not an orthogonal matrix. Any subspace must contain a zero vector. 22. YES. We show that W is closed under addition and closed under scalar multiplication. Closure under Addition: Assume that f and g belong to W . Thus, f (a) = 2f (b) and g (a) = 2g (b). We must show that f + g belongs to W . We have (f + g )(a) = f (a) + g (a) = 2f (b) + 2g (b) = 2[f (b) + g (b)] = 2(f + g )(b), so f + g ∈ W . So W is closed under addition. Closure under Scalar Multiplication: Assume that f belongs to W and k is a scalar. Thus, f (a) = 2f (b). Moreover, (kf )(a) = kf (a) = k (2f (b)) = 2(kf (b)) = 2(kf )(b), so kf ∈ W . Thus, W is closed under scalar multiplication. 23. YES. We show that W is closed under addition and closed under scalar multiplication. Closure under Addition: Assume that f and g belong to W . Thus, b b f (x)dx = 0 and g (x)dx = 0. a a Hence, b b (f + g )(x)dx = a so f + g ∈ W . 
b f (x)dx + a g (x)dx = 0 + 0 = 0, a 336 Closure under Scalar Multiplication: Assume that f belongs to W and k is a scalar. Thus, b f (x)dx = 0. a Therefore, b b (kf )(x)dx = a b f (x)dx = k · 0 = 0, kf (x)dx = k a a so kf ∈ W . 24. YES. We show that W is closed under addition and scalar multiplication. Closure under Addition: Assume that b1 a2 d1 , c2 f1 e2 a1 c1 e1 b2 d2 ∈ W. f2 Thus, a1 + b1 = c1 + f1 , a1 − c1 = e1 − f1 − d1 , a2 + b2 = c2 + f2 , a2 − c2 = e2 − f2 − d2 . Hence, (a1 + a2 ) + (b1 + b2 ) = (c1 + c2 ) + (f1 + f2 ) and (a1 + a2 ) − (c1 + c2 ) = (e1 + e2 ) − (f1 + f2 ) − (d1 + d2 ). Thus, a1 c1 e1 b1 a2 d1 + c2 f1 e2 b2 a1 + a2 d2 = c1 + c2 f2 e1 + e2 b1 + b2 d1 + d2 ∈ W, f1 + f2 which means W is closed under addition. Closure under Scalar Multiplication: Assume that ab c d ∈W ef and k is a scalar. Then we have a + b = ka − kc = ke − kf − kd. Therefore, a k c e c + f and a − c = e − f − d. Thus, ka + kb = kc + kf and b ka kb d = kc kd ∈ W, f ke kf so W is closed under scalar multiplication. 25. (a) NO, (b) YES. Since 3 vectors are required to span R3 , S cannot span V . However, since the vectors are not proportional, they are linearly independent. 6 −3 2 1 1 , 26. (a) YES, (b) YES. If we place the three vectors into the columns of a 3 × 3 matrix 1 1 −8 −1 we observe that the matrix is invertible. Hence, its columns are linearly independent. Since we have 3 linearly independent vectors in the 3-dimensional space R3 , we have a basis for R3 . 337 27. (a) NO, (b) YES. Since we have only 3 vectors in a 4-dimensional vector space, they cannot possibly span R4 . To check linear independence, we place the vectors into the columns of a matrix: 61 1 −3 1 −8 2 1 −1 . 00 0 The ﬁrst three rows for the same invertible matrix as in the previous problem, so the reduced row-echelon form 100 0 1 0 is 0 0 1 , so there are no free variables, and hence, the column vectors form a linearly independent 000 set. 28. (a) YES, (b) NO. 
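(As an aside, the invertibility claim underlying Problems 26 and 27 can be confirmed with a determinant; a minimal sketch:)

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# the vectors (6,-3,2), (1,1,1), (1,-8,-1) from Problem 26, written as columns
M = [[6, 1, 1], [-3, 1, -8], [2, 1, -1]]
assert det3(M) == 18         # nonzero, so the columns form a basis of R^3
```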
Since (0, 0, 0) is a member of S , S cannot be linearly independent. However, the set {(10, −6, 5), (3, −3, 2), (6, 4, −1)} is linearly independent (these vectors can be made to form a 3 × 3 matrix that is invertible), and thus, S must span at least a 3-dimensional space. Since dim[R3 ] = 3, we know that S spans R3 . 29. (a) YES, (b) YES. Consider the linear equation c1 (2x − x3 ) + c2 (1 + x + x2 ) + 3c3 + c4 x = 0. Then (c2 + 3c3 ) + (2c1 + c2 + c4 )x + c2 x2 − c1 x3 = 0. From the latter equation, we see that c1 = c2 = 0 (looking at the x2 and x3 coeﬃcients) and thus, c3 = c4 = 0 (looking at the constant term and x coeﬃcient). Thus, c1 = c2 = c3 = c4 = 0, and hence, S is linearly independent. Since we have four linearly independent vectors in the 4-dimensional vector space P3 , we conclude that these vectors also span P3 . 30. (a) NO, (b) NO. The set S only contains four vectors, although dim[P4 ] = 5, so it is impossible for S to span P4 . Alternatively, simply note that none of the polynomials in S contain an x3 term. To check linear independence, consider the equation c1 (x4 + x + 2 + 1) + c2 (x2 + x + 1) + c3 (x + 1) + c4 (x4 + 2x + 3) = 0. Rearranging this, we have (c1 + c4 )x4 + (c1 + c2 )x2 + (c2 + c3 + 2c4 )x + (c1 + c2 + 3c4 ) = 0, and so we look to solve the linear system with augmented matrix 11130 1 1 1 30 0 1 1 2 0 0 1 1 2 0 ∼ = 1 1 0 0 0 0 0 −1 −3 0 10010 0 −1 −1 −2 0 11130 0 1 1 2 0 , 0 0 1 3 0 00000 where we have added −1 times the ﬁrst row to each of the third and fourth rows in the ﬁrst step, and zeroed at the last row in the second step. We see that c4 is a free variable, which means that a nontrivial solution to the linear system exists, and therefore, the original vectors are linearly dependent. 338 31. (a) NO, (b) YES. The vector space M2×3 (R) is 6-dimensional, and since only four vectors belong to S , S cannot possibly span M2×3 (R). 
On the other hand, if we form the linear system with augmented matrix −1 3 −1 −11 0 0 2 −2 −6 0 0 1 −3 −5 0 , 01 3 1 0 12 2 −2 0 13 1 −5 0 row reduction shows that each column of a row-echelon form contains a pivot, and therefore, the vectors are linearly independent. 32. (a) NO, (b) NO. Since M2 (R) is only 4-dimensional and S contains 5 vectors, S cannot possibly be linearly independent. Moreover, each matrix in S is symmetric, and therefore, only symmetric matrices can be found in the span(S ). Thus, S fails to span M2 (R). 33. Assume that {v1 , v2 , v3 } is linearly independent, and that v4 does not lie in span{v1 , v2 , v3 }. We will show that {v1 , v2 , v3 , v4 } is linearly independent. To do this, assume that c1 v1 + c2 v2 + c3 v3 + c4 v4 = 0. We must prove that c1 = c2 = c3 = c4 = 0. If c4 = 0, then we rearrange the above equation to show that v4 = − c1 c2 c3 v1 − v2 − v3 , c4 c4 c4 which implies that v4 ∈ span{v1 , v2 , v3 }, contrary to our assumption. Therefore, we know that c4 = 0. Hence the equation above reduces to c1 v1 + c2 v2 + c3 v3 = 0, and the linear independence of {v1 , v2 , v3 } now implies that c1 = c2 = c3 = 0. Therefore, c1 = c2 = c3 = c4 = 0, as required. 34. Note that v and w are column vectors in Rm . We have v · w = vT w. Since w ∈ nullspace(AT ), we have AT w = 0, and since v ∈ colspace(A), we can write v = Av1 for some v1 ∈ Rm . Therefore, T T T v · w = vT w = (Av1 )T w = (v1 AT )w = v1 (AT w) = v1 (0) = 0, as desired. 35. (a): Our proof here actually shows that the set of n × n skew-symmetric matrices forms a subspace of Mn (R) for all positive integers n. We show that W is closed under addition and scalar multiplication: Closure under Addition: Suppose that A and B are in W . This means that AT = −A and B T = −B . Then (A + B )T = AT + B T = (−A) + (−B ) = −(A + B ), so A + B is skew-symmetric. Therefore, A + B belongs to W , and W is closed under addition. 
Closure under Scalar Multiplication: Suppose that A is in W and k is a scalar. We know that AT = −A. Then (kA)T = k (AT ) = k (−A) = −(kA), 339 so kA is skew-symmetric. Therefore, kA belongs to W , and W is closed under scalar multiplication. Therefore, W is a subspace. (b): An arbitrary 3 × 3 skew-symmetric matrix takes the form 0 ab 010 00 −a 0 c = a −1 0 0 + b 0 0 −b −c 0 000 −1 0 1 0 00 0 + c 0 0 1 . 0 0 −1 0 This results in 03 if and only if a = b = c = 0, so the three matrices appearing on the right-hand side are linearly independent. Moreover, the equation above also demonstrates that W is spanned by the three matrices appearing on the right-hand side. Therefore, these matrices are a basis for W : 010 001 0 00 0 1 . Basis = −1 0 0 , 0 0 0 , 0 000 −1 0 0 0 −1 0 Hence, dim[W ] = 3. (c): Since dim[M3 (R)] = 9, we must add an additional six (linearly independent) 3×3 matrices to form a basis for M3 (R). Using the notation prior to Example 4.6.3, we can use the matrices E11 , E22 , E33 , E12 , E13 , E23 to extend the basis in part (b) to a basis for M3 (R): 0 Basis for M3 (R) = −1 0 1 0 0 0 0 0 , 0 0 −1 0 0 0 1 0 00 0 , 0 0 1 , E11 , E22 , E33 , E12 , E13 , E23 0 0 −1 0 36. (a): We show that W is closed under addition and closed under scalar multiplication. Arbitrary elements of W have the form a b −a − b . c d −c − d −a − c −b − d a + b + c + d Closure under Addition: Let a1 b1 c1 d1 X1 = −a1 − c1 −b1 − d1 be elements of W . Then −a1 − b1 −c1 − d1 a1 + b1 + c1 + d1 a1 + a2 c1 + c2 X1 + X2 = −a1 − c1 − a2 − c2 a2 c2 X2 = −a2 − c2 and b 1 + b2 d1 + d2 −b1 − d1 − b2 − d2 b2 d2 −b2 − d2 −a2 − b2 −c2 − d2 a2 + b2 + c2 + d2 −a1 − b1 − a2 − b2 , −c1 − d1 − c2 − d2 a1 + b1 + c1 + d1 + a2 + b2 + c2 + d2 which is immediately seen to have row sums and column sums of zero. Thus, X1 + X2 belongs to W , and W is closed under addition. Closure under Scalar Multiplication: Let a1 c1 X1 = −a1 − c1 b1 d1 −b1 − d1 −a1 − b1 , −c1 − d1 a1 + b1 + c1 + d1 340 and let k be a scalar. 
Then ka1 kb1 k (−a1 − b1 ) , kc1 kd1 k (−c1 − d1 ) kX1 = k (−a1 − c1 ) k (−b1 − d1 ) k (a1 + b1 + c1 + d1 ) and we see by inspection that all row and column sums of kX1 are zero, so kX1 belongs to W . Therefore, W is closed under scalar multiplication. Therefore, W is a subspace of M3 (R). (b): We may write a b −a − b 1 = a 0 c d −c − d −a − c −b − d a + b + c + d −1 0 −1 0 1 −1 0 0 0 +b 0 0 0 +c 1 0 1 0 −1 1 −1 0 0 0 0 0 0 −1 +d 0 1 −1 . 0 1 0 −1 1 This results in 03 if and only if a = b = c = d = 0, so the matrices appearing on the right-hand side are linearly independent. Moreover, the equation above also demonstrates that W is spanned by the four matrices appearing on the right-hand side. Therefore, these matrices are a basis for W : 0 0 0 00 0 0 1 −1 1 0 −1 1 −1 . 0 0 , 1 0 −1 , 0 0 , 0 Basis = 0 0 −1 0 1 0 −1 1 −1 0 1 0 −1 1 Hence, dim[W ] = 4. (c): Since dim[M3 (R)] = 9, we must add an additional ﬁve (linearly independent) 3 × 3 matrices to form a basis for M3 (R). Using the notation prior to Example 4.6.3, we can use the matrices E11 , E12 , E13 , E21 , and E31 to extend the basis from part (b) to a basis for M3 (R): Basis for M3 (R) = 1 0 −1 00 0 0 0 0 0 −1 0 1 −1 0 0 , 0 0 0 , 1 0 −1 , 0 1 −1 , E11 , E12 , E13 , E21 , E31 . 0 1 0 −1 1 −1 0 1 0 −1 1 37. (a): We must verify the axioms (A1)-(A10) for a vector space: Axiom (A1): Assume that (v1 , w1 ) and (v2 , w2 ) belong to V ⊕ W . Since v1 +V v2 ∈ V and w1 +W w2 ∈ W , then the sum (v1 , w1 ) + (v2 , w2 ) = (v1 +V v2 , w1 +W w2 ) lies in V ⊕ W . Axiom (A2): Assume that (v, w) belongs to V ⊕ W , and let k be a scalar. Since k ·V v ∈ V and k ·W w ∈ W , the scalar multiplication k · (v, w) = (k ·V v, k ·W w) lies in V ⊕ W . For the remainder of the axioms, we will omit the ·V and ·W notations. They are to be understood. 341 Axiom (A3): Assume that (v1 , w1 ), (v2 , w2 ) ∈ V ⊕ W . Then (v1 , w1 ) + (v2 , w2 ) = (v1 + v2 , w1 + w2 ) = (v2 + v1 , w2 + w1 ) = (v2 , w2 ) + (v1 , w1 ), as required. 
Axiom (A4): Assume that (v1 , w1 ), (v2 , w2 ), (v3 , w3 ) ∈ V ⊕ W . Then ((v1 , w1 ) + (v2 , w2 )) + (v3 , w3 ) = (v1 + v2 , w1 + w2 ) + (v3 , w3 ) = ((v1 + v2 ) + v3 , (w1 + w2 ) + w3 ) = (v1 + (v2 + v3 ), w1 + (w2 + w3 )) = (v1 , w1 ) + (v2 + v3 , w2 + w3 ) = (v1 , w1 ) + ((v2 , w2 ) + (v3 , w3 )), as required. Axiom (A5): We claim that the zero vector in V ⊕ W is (0V , 0W ), where 0V is the zero vector in the vector space V and 0W is the zero vector in the vector space W . To check this, let (v, w) ∈ V ⊕ W . Then (0V , 0W ) + (v, w) = (0V + v, 0W + w) = (v, w), which conﬁrms that (0V , 0W ) is the zero vector for V ⊕ W . Axiom (A6): We claim that the additive inverse of the vector (v, w) ∈ V ⊕ W is the vector (−v, −w), where −v is the additive inverse of v in the vector space V and −w is the additive inverse of w in the vector space W . We check this: (v, w) + (−v, −w) = (v + (−v ), w + (−w)) = (0V , 0W ), as required. Axiom (A7): For every vector (v, w) ∈ V ⊕ W , we have 1 · (v, w) = (1 · v, 1 · w) = (v, w), where in the last step we have used the fact that Axiom (A7) holds in each of the vector spaces V and W . Axiom (A8): Let (v, w) be a vector in V ⊕ W , and let r and s be scalars. Using the fact that Axiom (A8) holds in V and W , we have (rs)(v, w) = ((rs)v, (rs)w) = (r(sv ), r(sw)) = r(sv, sw) = r(s(v, w)). Axiom (A9): Let (v1 , w1 ) and (v2 , w2 ) be vectors in V ⊕ W , and let r be a scalar. Then r((v1 , w1 ) + (v2 , w2 )) = r(v1 + v2 , w1 + w2 ) = (r(v1 + v2 ), r(w1 + w2 )) = (rv1 + rv2 , rw1 + rw2 ) = (rv1 , rw1 ) + (rv2 , rw2 ) = r(v1 , w1 ) + r(v2 , w2 )), as required. 342 Axiom (A10): Let (v, w) ∈ V ⊕ W , and let r and s be scalars. Then (r + s)(v, w) = ((r + s)v, (r + s)w) = (rv + sv, rw + sw) = (rv, rw) + (sv, sw) = r(v, w) + s(v, w), as required. 
(b): We show that {(v, 0) : v ∈ V } is a subspace of V ⊕ W , by checking closure under addition and closure under scalar multiplication: Closure under Addition: Suppose (v1 , 0) and (v2 , 0) belong to {(v, 0) : v ∈ V }, where v1 , v2 ∈ V . Then (v1 , 0) + (v2 , 0) = (v1 + v2 , 0) ∈ {(v, 0) : v ∈ V }, which shows that the set is closed under addition. Closure under Scalar Multiplication: Suppose (v, 0) ∈ {(v, 0) : v ∈ V } and k is a scalar. Then k (v, 0) = (kv, 0) is again in the set. Thus, {(v, 0) : v ∈ V } is closed under scalar multiplication. Therefore, {(v, 0) : v ∈ V } is a subspace of V ⊕ W . (c): Let {v1 , v2 , . . . , vn } be a basis for V , and let {w1 , w2 , . . . , wm } be a basis for W . We claim that S = {(vi , 0) : 1 ≤ i ≤ n} ∪ {(0, wj ) : 1 ≤ j ≤ m} is a basis for V ⊕ W . To show this, we will verify that S is a linearly independent set that spans V ⊕ W : Check that S is linearly independent: Assume that c1 (v1 , 0) + c2 (v2 , 0) + · · · + cn (vn , 0) + d1 (0, w1 ) + d2 (0, w2 ) + · · · + dm (0, wm ) = (0, 0). We must show that c1 = c2 = · · · = cn = d1 = d2 = · · · = dm = 0. Adding the vectors on the left-hand side, we have (c1 v1 + c2 v2 + · · · + cn vn , d1 w1 + d2 w2 + · · · + dm wm ) = (0, 0), so that c1 v1 + c2 v2 + · · · + cn vn = 0 and d1 w1 + d2 w2 + · · · + dm wm = 0. Since {v1 , v2 , . . . , vn } is linearly independent, c1 = c2 = · · · = cn = 0, and since {w1 , w2 , . . . , wm } is linearly independent, d1 = d2 = · · · = dm = 0. Thus, S is linearly independent. Check that S spans V ⊕ W : Let (v, w) ∈ V ⊕ W . We must express (v, w) as a linear combination of the vectors in S . Since {v1 , v2 , . . . , vn } spans V , there exist scalars c1 , c2 , . . . , cn such that v = c1 v1 + c2 v2 + · · · + cn vn , 343 and since {w1 , w2 , . . . , wm } spans W , there exist scalars d1 , d2 , . . . , dm such that w = d1 w1 + d2 w2 + · · · + dm wm . 
Then (v, w) = c1 (v1 , 0) + c2 (v2 , 0) + · · · + cn (vn , 0) + d1 (0, w1 ) + d2 (0, w2 ) + · · · + dm (0, wm ). Therefore, (v, w) is a linear combination of vectors in S , so S spans V ⊕ W . Therefore, S is a basis for V ⊕ W . Since S contains n + m vectors, dim[V ⊕ W ] = n + m. 38. There are many examples here. One such example is S = {x3 , x3 − x2 , x3 − x, x3 − 1}, a basis for P3 whose vectors all have degree 3. To see that S is a basis, note that S is linearly independent, since if c1 x3 + c2 (x3 − x2 ) + c3 (x3 − x) + c4 (x3 − 1) = 0, then (c1 + c2 + c3 + c4 )x3 − c2 x2 − c3 x − c4 = 0, and so c1 = c2 = c3 = c4 = 0. Since S is a linearly independent set of 4 vectors and dim[P3 ] = 4, S is a basis for P3 . 39. Let A be an m × n matrix. By the Rank-Nullity Theorem, dim[colspace(A)] + dim[nullspace(A)] = n. Since, by assumption, colspace(A) = nullspace(A) = r, n = 2r must be even. 40. The ith row of the matrix A is bi (c1 c2 . . . cn ). Therefore, each row of A is a multiple of the ﬁrst row, and so rank(A) = 1. Thus, by the Rank-Nullity Theorem, nullity(A) = n − 1. 41. A row-echelon form of A is given by 1 0 2 0 a basis for the columnspace of A is given by −2 1 . Thus, a basis for the rowspace of A is given by {(1, 2)}, −3 −6 , and a basis for the nullspace of A is given by . All three subspaces are one-dimensional. −2 0 1/3 5/21 . Thus, a basis for the rowspace of A is 0 0 6 −1 15 {(1, −6, −2, 0), (0, 1, 3 , 21 )}, and a basis for the column space of A is given by 3 , 3 . For 7 21 the nullspace, we observe that the equations corresponding to the row-echelon form of A can be written as 1 −6 1 42. A row-echelon form of A is given by 0 0 0 x − 6y − 2z = 0 1 Set w = t and z = s. Then y = − 3 s − nullspace(A) = − 5 21 t and 1 5 y + z + w = 0. 3 21 and x = − 10 t. Thus, 7 10 1 5 t, − s − t, s, t 7 3 21 : s, t ∈ R = t− 10 5 1 , − , 0, 1 + s 0, − , 1, 0 7 21 3 5 Hence, a basis for the nullspace of A is given by {(− 10 , − 21 , 0, 1), (0, − 1 , 1, 0)}. 7 3 . 344 1 0 43. 
A row-echelon form of A is given by 0 0 {(1, −2.5, −5), (0, 1, 1.3), (0, 0, 1)}, and a basis −4 0 6 −2 −2.5 −5 1 1.3 . Thus, a basis for the rowspace of A is given by 0 1 0 0 for the columnspace of A is given by 0 3 10 13 , , . 5 2 5 10 Moreover, we have that the nullspace of A is 0-dimensional, and so the basis is empty. 10 2 2 1 0 1 −1 −4 −3 . Thus, a basis for the rowspace of A is 44. A row-echelon form of A is given by 0 0 1 4 3 00 0 1 0 given by {(1, 0, 2, 2, 1), (0, 1, −1, −4, −3), (0, 0, 1, 4, 3), (0, 0, 0, 1, 0)}, a basis for the columnspace of A is given by 3 1 , 1 −2 5 5 0 2 , 1 1 −4 0 2 2 . , −2 −2 For the nullspace, if the variables corresponding the columns are (x, y, z, u, v ), then the row-echelon form tells us that v = t is a free variable, u = 0, z = −3t, y = 0, and x = 5t. Thus, nullspace(A) = {(5t, 0, −3t, 0, t) : t ∈ R} = {t(5, 0, −3, 0, 1) : t ∈ R}, and so a basis for the nullspace of A is {(5, 0, −3, 0, 1)}. 45. We will obtain bases for the rowspace, columnspace, and nullspace and orthonormalize them. A row 126 0 1 2 echelon form of A is given by 0 0 0 . We see that a basis for the rowspace of A is given by 000 {(1, 2, 6), (0, 1, 2)}. We apply Gram-Schmidt to this set, and thus we need to replace (0, 1, 2) by (0, 1, 2) − 14 (1, 2, 6) = 41 − 14 13 2 , ,− 41 41 41 So an orthogonal basis for the rowspace of A is given by (1, 2, 6) , − 14 13 2 , ,− 41 41 41 . . 345 To replace this with an orthonormal basis, we must normalize each vector. The ﬁrst one has norm the second one has norm √3 . Hence, an orthonormal basis for the rowspace of A is 41 √ √ 41 41 (1, 2, 6), 41 3 − 14 13 2 , ,− 41 41 41 √ 41 and . Returning to the row-echelon form of A obtained above, we see that a basis for the columnspace of A is 2 1 21 , . 1 0 0 1 2 1 We apply Gram-Schmidt to this set, and thus we need to replace by 1 0 2 1 4 1 4 2 1 −1 − = 1 6 0 3 3 . 0 1 −2 So an orthogonal basis for the columnspace of A is given by 1 4 2 1 −1 , . 
0 3 3 1 −2 √ √ The norms of these vectors are, respectively, 6 and 30/3. Hence, we normalize the above orthogonal basis to obtain the orthonormal basis for the columnspace: 1 4 √ √ 6 2 , 30 −1 . 6 0 30 3 1 −2 Returning once more to the row-echelon form of A obtained above, we see that, in order to ﬁnd the nullspace of A, we must solve the equations x + 2y + 6z = 0 and y + 2z = 0. Setting z = t as a free variable, we ﬁnd that y = −2t and x = −2t. Thus, a basis for the nullspace of A is {(−2, −2, 1)}, which can be normalized to 2 21 − ,− , . 3 33 13 5 0 1 3/2 46. A row-echelon form for A is 0 0 1 . Note that since rank(A) = 3, nullity(A) = 0, and so 0 0 0 00 0 there is no basis for nullspace(A). Moreover, rowspace(A) is a 3-dimensional subspace of R3 , and therefore, rowspace(A) = R3 . An orthonormal basis for this is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. 346 Finally, consider the columnspace of A. We must apply the Gram-Schmidt process to the three columns of A. Thus, we replace the second column vector by −1 1 3 −1 1 −3 v2 = 2 − 4 0 = 2 . 1 1 5 1 1 5 Next, we replace the third column vector by 5 3 1 −1 1 −1 1 3 7 3 v3 = 3 − 0 − 2 = 0 2 2 2 1 1 −3 8 3 1 1 Hence, an orthogonal basis for the columnspace of A is 1 −1 −1 1 0 , 2 1 1 1 1 , Normalizing each vector yields the orthonormal basis 1 −1 1 −1 1 1 0 , √ 2 2 8 1 1 1 1 3 3 0 −3 3 1 , 6 . . 3 3 0 −3 3 . 47. Let x1 = (5, −1, 2) and let x2 = (7, 1, 1). Using the Gram-Schmidt process, we have v1 = x1 = (5, −1, 2) and v2 = x2 − x2 , v1 36 6 12 11 7 v = (7, 1, 1) − (5, −1, 2) = (7, 1, 1) − (6, − , ) = (1, , − ). 21 | v1 | 30 55 5 5 Hence, an orthogonal basis is given by {(5, −1, 2), (1, 11 7 , − )}. 5 5 48. We already saw in Problem 26 that S spans R3 , so therefore an obvious orthogonal basis for span(S ) is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Alternatively, for practice with Gram-Schmidt, we would proceed as follows: Let x1 = (6, −3, 2), x2 = (1, 1, 1), and x3 = (1, −8, −1). 
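Before carrying out the Gram-Schmidt computation by hand, the result can be cross-checked in exact rational arithmetic with a small routine (a sketch using Python's `fractions`; the function names are illustrative):

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(xs):
    """Classical Gram-Schmidt (no normalization), exact over the rationals."""
    basis = []
    for x in xs:
        v = list(x)
        for b in basis:
            c = dot(x, b) / dot(b, b)       # projection coefficient <x,b>/<b,b>
            v = [vi - c * bi for vi, bi in zip(v, b)]
        basis.append(v)
    return basis

xs = [[F(6), F(-3), F(2)], [F(1), F(1), F(1)], [F(1), F(-8), F(-1)]]
v1, v2, v3 = gram_schmidt(xs)
assert v1 == [6, -3, 2]
assert v2 == [F(19, 49), F(64, 49), F(39, 49)]
assert v3 == [F(-45, 61), F(-36, 61), F(81, 61)]
assert dot(v1, v2) == dot(v1, v3) == dot(v2, v3) == 0
```

Working over `Fraction` rather than floats reproduces the book's exact answers (19/49, 64/49, 39/49) and (−45/61, −36/61, 81/61) with no rounding.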
Using the Gram-Schmidt process, we have v1 = x1 = (6, −3, 2), 347 v2 = x2 − x2 , v1 5 v1 = (1, 1, 1) − (6, −3, 2) = 2 v1 49 19 64 39 ,, 49 49 49 , and v3 = x3 − x3 , v2 4 38 x3 , v1 v1 − v2 = (1, −8, −1) − (6, −3, 2) + (19, 64, 39) = v1 2 v2 2 7 427 − 45 36 81 ,− , 61 61 61 . Hence, an orthogonal basis is given by (6, −3, 2), 19 64 39 ,, 49 49 49 ,− 45 36 81 ,− , 61 61 61 . 49. We already saw in Problem 29 that S spans P3 , so therefore we can apply Gram-Schmidt to the basis {1, x, x2 , x3 } for P3 , instead of the given set of polynomials. Let x1 = 1, x2 = x, x3 = x2 , and x4 = x3 . Using the Gram-Schmidt process, we have v1 = x1 = 1, v2 = x2 − v3 = x3 − 1 x2 , v1 v1 = x − , 2 v1 2 x3 , v1 x3 , v2 1 1 1 v1 − v2 = x2 − (x − ) − = x2 − x + , v1 2 v2 2 2 3 6 and v4 = x4 − x4 , v2 x4 , v3 1 9 1 3 1 3 3 1 x4 , v1 v1 − v2 − v3 = x3 − − (x − ) − (x2 − x + ) = x3 − x2 + x − . 2 2 2 v1 v2 v3 4 10 2 2 6 2 5 20 Hence, an orthogonal basis is given by 1 1 3 3 1 1, x − , x2 − x + , x3 − x2 + x − 2 6 2 5 20 . 50. It is easy to see that the span of the set of vectors in Problem 32 is the set of all 2 × 2 symmetric matrices. Therefore, we can simply give the orthogonal basis 1 0 0 0 , 0 1 1 0 , 0 0 0 1 for the set of all 2 × 2 symmetric matrices. 51. We have u, v = (2, 3), (4, −1) = 2 · 4 + 3 · (−1) = 5, u = and so θ = cos−1 u, v uv = cos−1 √ 5 √ 13 17 √ 13, v = √ 17, ≈ 1.23 radians. 52. We have u, v = (−2, −1, 2, 4), (−3, 5, 1, 1) = (−2) · (−3) + (−1) · 5 + 2 · 1 + 4 · 1 = 7, u = 5, v = 6, and so θ = cos−1 u, v uv = cos−1 7 5·6 = cos−1 (7/30) ≈ 1.34 radians. 348 53. For Problem 51, we have u, v = (2, 3), (4, −1) = 2 · 2 · 4 + 3 · (−1) = 13, u = √ 17, v = √ 33, and so θ = cos−1 u, v uv = cos−1 √ 13 √ 17 33 ≈ 0.99 radians. For Problem 52, we have u, v = (−2, −1, 2, 4), (−3, 5, 1, 1) = 2 · (−2) · (−3) + (−1) · 5 + 2 · 1 + 4 · 1 = 13, u = √ 29, v = √ 45, and so θ = cos−1 u, v uv = cos−1 √ 13 √ 29 · 45 ≈ 1.20 radians. 54. 
(a): We must verify the four axioms for an inner product given in Definition 4.11.3.

Axiom 1: We have p · p = p(t0)p(t0) + p(t1)p(t1) + · · · + p(tn)p(tn) = p(t0)² + p(t1)² + · · · + p(tn)² ≥ 0. Moreover, p(t0)² + p(t1)² + · · · + p(tn)² = 0 ⇐⇒ p(t0) = p(t1) = · · · = p(tn) = 0. But the only polynomial of degree ≤ n which has more than n roots is the zero polynomial. Thus, p · p = 0 ⇐⇒ p = 0.

Axiom 2: We have p · q = p(t0)q(t0) + p(t1)q(t1) + · · · + p(tn)q(tn) = q(t0)p(t0) + q(t1)p(t1) + · · · + q(tn)p(tn) = q · p for all p, q ∈ Pn.

Axiom 3: Let k be a scalar, and let p, q ∈ Pn. Then (kp) · q = (kp)(t0)q(t0) + (kp)(t1)q(t1) + · · · + (kp)(tn)q(tn) = kp(t0)q(t0) + kp(t1)q(t1) + · · · + kp(tn)q(tn) = k[p(t0)q(t0) + p(t1)q(t1) + · · · + p(tn)q(tn)] = k[p · q], as required.

Axiom 4: Let p1, p2, q ∈ Pn. Then we have (p1 + p2) · q = (p1 + p2)(t0)q(t0) + (p1 + p2)(t1)q(t1) + · · · + (p1 + p2)(tn)q(tn) = [p1(t0) + p2(t0)]q(t0) + [p1(t1) + p2(t1)]q(t1) + · · · + [p1(tn) + p2(tn)]q(tn) = [p1(t0)q(t0) + · · · + p1(tn)q(tn)] + [p2(t0)q(t0) + · · · + p2(tn)q(tn)] = (p1 · q) + (p2 · q), as required.

(b): The projection of p2 onto span{p0, p1} is given by (⟨p2, p0⟩/‖p0‖²) p0 + (⟨p2, p1⟩/‖p1‖²) p1 = (20/4) p0 + 0 · p1 = 5.

(c): We take q = t² − 5.

55. Let x = (2, 3, 4) and let v = (6, −1, −4). We must find the length of
x − P(x, v) = (2, 3, 4) − (⟨(2, 3, 4), (6, −1, −4)⟩/‖(6, −1, −4)‖²)(6, −1, −4) = (2, 3, 4) + (7/53)(6, −1, −4) = (148/53, 152/53, 184/53),
which is ‖x − P(x, v)‖ = ‖(148/53, 152/53, 184/53)‖ ≈ 5.30.

56. Note that ⟨x − y, vi⟩ = ⟨x, vi⟩ − ⟨y, vi⟩ = 0, by assumption. Let v be an arbitrary vector in V, and write v = a1v1 + a2v2 + · · · + anvn, for some scalars a1, a2, . . . , an. Observe that
⟨x − y, v⟩ = ⟨x − y, a1v1 + a2v2 + · · · + anvn⟩ = a1⟨x − y, v1⟩ + a2⟨x − y, v2⟩ + · · · + an⟨x − y, vn⟩ = a1 · 0 + a2 · 0 + · · · + an · 0 = 0.
Thus, x − y is orthogonal to every vector in V. In particular, ⟨x − y, x − y⟩ = 0, and hence x − y = 0. Therefore x = y.

57. Any of the conditions (a)–(p) appearing in the Invertible Matrix Theorem would be appropriate at this point in the text.

Solutions to Section 5.1

True-False Review:

1. FALSE. The conditions T(u + v) = T(u) + T(v) and T(c · v) = c · T(v) must hold for all vectors u, v in V and for all scalars c, not just "for some".

2. FALSE. The dimensions of the matrix A should be m × n, not n × m, as stated in the question.

3. FALSE. This will only necessarily hold for a linear transformation, not for more general mappings.

4. TRUE. This is precisely the definition of the matrix associated with a linear transformation, as given in the text.

5. TRUE. Since 0 = T(0) = T(v + (−v)) = T(v) + T(−v), we conclude that T(−v) = −T(v).

6. TRUE. Using the properties of a linear transformation, we have T((c + d)v) = T(cv + dv) = T(cv) + T(dv) = cT(v) + dT(v), as stated.

Problems:

1. Let (x1, x2), (y1, y2) ∈ R² and c ∈ R. Then
T((x1, x2) + (y1, y2)) = T(x1 + y1, x2 + y2) = (x1 + y1 + 2x2 + 2y2, 2x1 + 2y1 − x2 − y2) = (x1 + 2x2, 2x1 − x2) + (y1 + 2y2, 2y1 − y2) = T(x1, x2) + T(y1, y2),
and
T(c(x1, x2)) = T(cx1, cx2) = (cx1 + 2cx2, 2cx1 − cx2) = c(x1 + 2x2, 2x1 − x2) = cT(x1, x2).
Thus, T is a linear transformation.

2. Let (x1, x2, x3), (y1, y2, y3) ∈ R³ and c ∈ R. Then
T((x1, x2, x3) + (y1, y2, y3)) = T(x1 + y1, x2 + y2, x3 + y3) = (x1 + y1 + 3x2 + 3y2 + x3 + y3, x1 + y1 − x2 − y2) = (x1 + 3x2 + x3, x1 − x2) + (y1 + 3y2 + y3, y1 − y2) = T(x1, x2, x3) + T(y1, y2, y3),
and
T(c(x1, x2, x3)) = T(cx1, cx2, cx3) = (cx1 + 3cx2 + cx3, cx1 − cx2) = c(x1 + 3x2 + x3, x1 − x2) = cT(x1, x2, x3).
Thus, T is a linear transformation.

3.
Let y1, y2 ∈ C²(I) and c ∈ R. Then
T(y1 + y2) = (y1 + y2)″ − 16(y1 + y2) = (y1″ − 16y1) + (y2″ − 16y2) = T(y1) + T(y2),
and
T(cy1) = (cy1)″ − 16(cy1) = c(y1″ − 16y1) = cT(y1).
Consequently, T is a linear transformation.

4. Let y1, y2 ∈ C²(I) and c ∈ R. Then
T(y1 + y2) = (y1 + y2)″ + a1(y1 + y2)′ + a2(y1 + y2) = (y1″ + a1y1′ + a2y1) + (y2″ + a1y2′ + a2y2) = T(y1) + T(y2),
and
T(cy1) = (cy1)″ + a1(cy1)′ + a2(cy1) = c(y1″ + a1y1′ + a2y1) = cT(y1).
Consequently, T is a linear transformation.

5. Let f, g ∈ V and c ∈ R. Then
T(f + g) = ∫ₐᵇ (f + g)(x) dx = ∫ₐᵇ [f(x) + g(x)] dx = ∫ₐᵇ f(x) dx + ∫ₐᵇ g(x) dx = T(f) + T(g),
and
T(cf) = ∫ₐᵇ [cf(x)] dx = c ∫ₐᵇ f(x) dx = cT(f).
Therefore, T is a linear transformation.

6. Let A1, A2, B ∈ Mn(R) and c ∈ R. Then
T(A1 + A2) = (A1 + A2)B − B(A1 + A2) = A1B + A2B − BA1 − BA2 = (A1B − BA1) + (A2B − BA2) = T(A1) + T(A2),
and
T(cA1) = (cA1)B − B(cA1) = c(A1B − BA1) = cT(A1).
Consequently, T is a linear transformation.

7. Let A, B ∈ Mn(R) and c ∈ R. Then
S(A + B) = (A + B) + (A + B)ᵀ = A + Aᵀ + B + Bᵀ = S(A) + S(B),
and
S(cA) = (cA) + (cA)ᵀ = c(A + Aᵀ) = cS(A).
Consequently, S is a linear transformation.

8. Let A, B ∈ Mn(R) and c ∈ R. Then
T(A + B) = tr(A + B) = (a11 + b11) + · · · + (ann + bnn) = (a11 + · · · + ann) + (b11 + · · · + bnn) = tr(A) + tr(B) = T(A) + T(B),
and
T(cA) = tr(cA) = ca11 + · · · + cann = c(a11 + · · · + ann) = c tr(A) = cT(A).
Consequently, T is a linear transformation.

9. Let x = (x1, x2), y = (y1, y2) be in R². Then T(x + y) = T(x1 + y1, x2 + y2) = (x1 + y1 + x2 + y2, 2), whereas T(x) + T(y) = (x1 + x2, 2) + (y1 + y2, 2) = (x1 + x2 + y1 + y2, 4). We see that T(x + y) ≠ T(x) + T(y); hence T is not a linear transformation.

10. Let A ∈ M2(R) and c ∈ R. Then T(cA) = det(cA) = c² det(A) = c²T(A). Since T(cA) ≠ cT(A) in general, it follows that T is not a linear transformation.

11. If T(x1, x2) = (3x1 − 2x2, x1 + 5x2), then A = [T(e1), T(e2)] = [3 −2; 1 5].

12. If T(x1, x2) = (x1 + 3x2, 2x1 − 7x2, x1), then A = [T(e1), T(e2)] = [1 3; 2 −7; 1 0].

13. If T(x1, x2, x3) = (x1 − x2 + x3, x3 − x1), then A = [T(e1), T(e2), T(e3)] = [1 −1 1; −1 0 1].

14. If T(x1, x2, x3) = x1 + 5x2 − 3x3, then A = [T(e1), T(e2), T(e3)] = [1 5 −3].

15. If T(x1, x2, x3) = (x3 − x1, −x1, 3x1 + 2x3, 0), then A = [T(e1), T(e2), T(e3)] = [−1 0 1; −1 0 0; 3 0 2; 0 0 0].

16. T(x) = Ax = [1 3; −4 7][x1; x2] = (x1 + 3x2, −4x1 + 7x2), which we write as T(x1, x2) = (x1 + 3x2, −4x1 + 7x2).

17. T(x) = Ax = [2 −1 5; 3 1 −2][x1; x2; x3] = (2x1 − x2 + 5x3, 3x1 + x2 − 2x3), which we write as T(x1, x2, x3) = (2x1 − x2 + 5x3, 3x1 + x2 − 2x3).

18. T(x) = Ax = [2 2 −3; 4 −1 2; 5 7 −8][x1; x2; x3] = (2x1 + 2x2 − 3x3, 4x1 − x2 + 2x3, 5x1 + 7x2 − 8x3), which we write as T(x1, x2, x3) = (2x1 + 2x2 − 3x3, 4x1 − x2 + 2x3, 5x1 + 7x2 − 8x3).

19. T(x) = Ax = [−3; −2; 0; 1][x] = (−3x, −2x, 0, x), which we write as T(x) = (−3x, −2x, 0, x).

20. T(x) = Ax = [1 −4 −6 0 2][x1; x2; x3; x4; x5] = x1 − 4x2 − 6x3 + 2x5, which we write as T(x1, x2, x3, x4, x5) = x1 − 4x2 − 6x3 + 2x5.

21. Let u be a fixed vector in V, v1, v2 ∈ V, and c ∈ R. Then
T(v1 + v2) = ⟨u, v1 + v2⟩ = ⟨u, v1⟩ + ⟨u, v2⟩ = T(v1) + T(v2),
and
T(cv1) = ⟨u, cv1⟩ = c⟨u, v1⟩ = cT(v1).
Thus, T is a linear transformation.

22. We must show that the linear transformation T respects addition and scalar multiplication.
T respects addition: Let v1 and v2 be vectors in V. Then we have
T(v1 + v2) = (⟨u1, v1 + v2⟩, ⟨u2, v1 + v2⟩) = (⟨v1 + v2, u1⟩, ⟨v1 + v2, u2⟩) = (⟨v1, u1⟩ + ⟨v2, u1⟩, ⟨v1, u2⟩ + ⟨v2, u2⟩) = (⟨v1, u1⟩, ⟨v1, u2⟩) + (⟨v2, u1⟩, ⟨v2, u2⟩) = (⟨u1, v1⟩, ⟨u2, v1⟩) + (⟨u1, v2⟩, ⟨u2, v2⟩) = T(v1) + T(v2).
T respects scalar multiplication: Let v be a vector in V and let c be a scalar. Then we have
T(cv) = (⟨u1, cv⟩, ⟨u2, cv⟩) = (⟨cv, u1⟩, ⟨cv, u2⟩) = (c⟨v, u1⟩, c⟨v, u2⟩) = c(⟨v, u1⟩, ⟨v, u2⟩) = c(⟨u1, v⟩, ⟨u2, v⟩) = cT(v).

23. (a) If D = [v1, v2] = [1 1; 1 −1], then det(D) = −2 ≠ 0, so by Corollary 4.5.15 the vectors v1 = (1, 1) and v2 = (1, −1) are linearly independent. Since dim[R²] = 2, it follows from Theorem 4.6.10 that {v1, v2} is a basis for R².
(b) Let x = (x1, x2) be an arbitrary vector in R². Since {v1, v2} forms a basis for R², there exist c1 and c2 such that (x1, x2) = c1(1, 1) + c2(1, −1), that is, such that c1 + c2 = x1, c1 − c2 = x2. Solving this system yields c1 = (x1 + x2)/2, c2 = (x1 − x2)/2. Thus,
(x1, x2) = (1/2)(x1 + x2)v1 + (1/2)(x1 − x2)v2,
so that
T[(x1, x2)] = T((1/2)(x1 + x2)v1 + (1/2)(x1 − x2)v2) = (1/2)(x1 + x2)T(v1) + (1/2)(x1 − x2)T(v2) = (1/2)(x1 + x2)(2, 3) + (1/2)(x1 − x2)(−1, 1) = (x1/2 + 3x2/2, 2x1 + x2).
In particular, when (4, −2) is substituted for (x1, x2), it follows that T(4, −2) = (−1, 6).

24. The matrix of T is the 4 × 2 matrix [T(e1), T(e2)]. Therefore, we must determine T(1, 0) and T(0, 1), which we can determine from the given information by using the linear transformation properties. A quick calculation shows that (1, 0) = −(2/3)(−1, 1) + (1/3)(1, 2), so
T(1, 0) = −(2/3)T(−1, 1) + (1/3)T(1, 2) = −(2/3)(1, 0, −2, 2) + (1/3)(−3, 1, 1, 1) = (−5/3, 1/3, 5/3, −1).
Similarly, we have (0, 1) = (1/3)(−1, 1) + (1/3)(1, 2), so
T(0, 1) = (1/3)T(−1, 1) + (1/3)T(1, 2) = (1/3)(1, 0, −2, 2) + (1/3)(−3, 1, 1, 1) = (−2/3, 1/3, −1/3, 1).
Therefore, we have the matrix of T: [−5/3 −2/3; 1/3 1/3; 5/3 −1/3; −1 1].

25. The matrix of T is the 2 × 4 matrix [T(e1), T(e2), T(e3), T(e4)]. Therefore, we must determine T(1, 0, 0, 0), T(0, 1, 0, 0), T(0, 0, 1, 0), and T(0, 0, 0, 1), which we can determine from the given information by using the linear transformation properties. We are given that T(1, 0, 0, 0) = (3, −2).
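Linearity lets the remaining values of T on the standard basis be recovered by successive differences of the given values. As a quick cross-check, a Python sketch (variable names ours) that assembles the matrix of T for Problem 25 this way:

```python
# Given values of T on the "staircase" vectors (1,0,0,0), (1,1,0,0), (1,1,1,0), (1,1,1,1),
# taken from the problem data used in this solution.
given = [(3, -2), (5, 1), (-1, 0), (2, 2)]

# T(e1) is given directly; each later T(e_k) is a difference of consecutive values,
# e.g. T(e2) = T(1,1,0,0) - T(1,0,0,0), by linearity of T.
columns = [given[0]] + [
    tuple(a - b for a, b in zip(given[k], given[k - 1])) for k in range(1, 4)
]

# The matrix of T has these images as its columns.
matrix = [[col[i] for col in columns] for i in range(2)]
print(matrix)  # [[3, 2, -6, 3], [-2, 3, -1, 2]]
```

The printed matrix agrees with the 2 × 4 matrix found by the hand computation.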
354 Next, T (0, 1, 0, 0) = T (1, 1, 0, 0) − T (1, 0, 0, 0) = (5, 1) − (3, −2) = (2, 3), T (0, 0, 1, 0) = T (1, 1, 1, 0) − T (1, 1, 0, 0) = (−1, 0) − (5, 1) = (−6, −1), and T (0, 0, 0, 1) = T (1, 1, 1, 1) − T (1, 1, 1, 0) = (2, 2) − (−1, 0) = (3, 2). Therefore, we have the matrix of T : 3 −2 2 −6 3 −1 3 2 . 26. The matrix of T is the 3 × 3 matrix [T (e1 ), T (e2 ), T (e3 )]. Therefore, we must determine T (1, 0, 0), T (0, 1, 0), and T (0, 0, 1), which we can determine from the given information by using the linear transformation properties. A quick calculation shows that (1, 0, 0) = (1, 2, 0) − 6(0, 1, 1) + 2(0, 2, 3), so T (1, 0, 0) = T (1, 2, 0) − 6T (0, 1, 1) + 2T (0, 2, 3) = (2, −1, 1) − 6(3, −1, −1) + 2(6, −5, 4) = (32, −5, 4). Similarly, (0, 1, 0) = 3(0, 1, 1) − (0, 2, 3), so T (0, 1, 0) = 3T (0, 1, 1) − T (0, 2, 3) = 3(3, −1, −1) − (6, −5, 4) = (3, 2, −7). Finally, (0, 0, 1) = −2(0, 1, 1) + (0, 2, 3), so T (0, 0, 1) = −2T (0, 1, 1) + T (0, 2, 3) = −2(3, −1, −1) + (6, −5, 4) = (0, −3, 6). Therefore, we have the matrix of T : 32 3 0 −5 2 −3 . 4 −7 6 27. The matrix of T is the 4 × 3 matrix [T (e1 ), T (e2 ), T (e3 )]. Therefore, we must determine T (1, 0, 0), T (0, 1, 0), and T (0, 0, 1), which we can determine from the given information by using the linear transforma1 tion properties. A quick calculation shows that (1, 0, 0) = 4 (0, −1, 4) − 1 (0, 3, 3) + 1 (4, 4, −1), so 4 4 T (1, 0, 0) = 1 1 1 1 1 311 1 T (0, −1, 4)− T (0, 3, 3)+ T (4, 4, −1) = (2, 5, −2, 1)− (−1, 0, 0, 5)+ (−3, 1, 1, 3) = (0, , − , − ). 4 4 4 4 4 4 244 1 Similarly, (0, 1, 0) = − 5 (0, −1, 4) + 4 15 (0, 3, 3), so 1 4 2 2 17 T (0, 1, 0) = − (2, 5, −2, 1) + (−1, 0, 0, 5) = (− , −1, , ). 5 15 3 5 15 Finally, (0, 0, 1) = 1 (0, −1, 4) + 5 T (0, 0, 1) = 1 15 (0, 3, 3), so 1 1 1 1 1 28 T (0, −1, 4) + T (0, 3, 3) = (2, 5, −2, 1) + (−1, 0, 0, 5) = ( , 1, − , ). 5 15 5 15 3 5 15 Therefore, we have the matrix of T : 0 3/2 −1/4 −1/4 −2/3 1/3 −1 1 . 2/5 −2/5 17/15 8/15 355 28. 
T (ax2 + bx + c) = aT (x2 )+ bT (x)+ cT (1) = a(3x +2)+ b(x2 − 1)+ c(x +1) = bx2 +(3a + c)x +(2a − b + c). 29. Using the linearity of T , we have T (2v1 + 3v2 ) = v1 + v2 and T (v1 + v2 ) = 3v1 − v2 . That is, 2T (v1 ) + 3T (v2 ) = v1 + v2 and T (v1 ) + T (v2 ) = 3v1 − v2 . Solving this system for the unknowns T (v1 ) and T (v2 ), we obtain T (v2 ) = 3v2 − 5v1 and T (v1 ) = 8v1 − 4v2 . 30. Since T is a linear transformation we obtain: T (x2 ) − T (1) = x2 + x − 3, 2T (x) = 4x, (30.1) (30.2) 3T (x) + 2T (1) = 2(x + 3) = 2x + 6. (30.3) From Equation (30.2) it follows that T (x) = 2x, so upon substitution into Equation (30.3) we have 3(2x) + 2T (1) = 2(x + 3) or T (1) = −2x + 3. Substituting this last result into Equation (30.1) yields T (x2 ) − (−2x + 3) = x2 + x − 3 so T (x2 ) = x2 − x. Now if a, b and c are arbitrary real numbers, then T (ax2 + bx + c) = aT (x2 ) + bT (x) + cT (1) = a(x2 − x) + b(2x) + c(−2x + 3) = ax2 − ax + 2bx − 2cx + 3c = ax2 + (−a + 2b − 2c)x + 3c. 31. Let v ∈ V . Since {v1 , v2 } is a basis for V , there exists a, b ∈ R such that v = av1 + bv2 . Hence T (v) = T (av1 + bv2 ) = aT (v1 ) + bT (v2 ) = a(3v1 − v2 ) + b(v1 + 2v2 ) = 3av1 − av2 + bv1 + 2bv2 = (3a + b)v1 + (2b − a)v2 . 32. Let v be any vector in V . Since {v1 , v2 , . . . , vk } spans V , we can write v = c1 v1 + c2 v2 + · · · + ck vk for suitable scalars c1 , c2 , . . . , ck . Then T (v) = T (c1 v1 + c2 v2 + · · · + ck vk ) = c1 T (v1 ) + c2 T (v2 ) + · · · + ck T (vk ) = c1 S (v1 ) + c2 S (v2 ) + · · · + ck S (vk ) = S (c1 v1 + c2 v2 + · · · + ck vk ) = S (v), as required. 33. Let v be any vector in V . Since {v1 , v2 , . . . , vk } is a basis for V , we can write v = c1 v1 + c2 v2 + · · · + ck vk for suitable scalars c1 , c2 , . . . , ck . Then T (v) = T (c1 v1 + c2 v2 + · · · + ck vk ) = c1 T (v1 ) + c2 T (v2 ) + · · · + ck T (vk ) = c1 0 + c2 0 + . . . ck 0 = 0, as required. 34. Let v1 and v2 be arbitrary vectors in V . 
Then, (T1 + T2 )(v1 + v2 ) = T1 (v1 + v2 ) + T2 (v1 + v2 ) = T1 (v1 ) + T1 (v2 ) + T2 (v1 ) + T2 (v2 ) = T1 (v1 ) + T2 (v1 ) + T1 (v2 ) + T2 (v2 ) = (T1 + T2 )(v1 ) + (T1 + T2 )(v2 ). Further, if k is any scalar, then (T1 + T2 )(k v) = T1 (k v) + T2 (k v) = kT1 (v) + kT2 (v) = k [T1 (v) + T2 (v)] = k (T1 + T2 )(v). It follows that T1 + T2 is a linear transformation. Now consider the transformation cT , where c is an arbitrary scalar. (cT )(v1 + v2 ) = cT (v1 + v2 ) = c[T (v1 ) + T (v2 )] = cT (v1 ) + cT (v2 ) = (cT )(v1 ) + (cT )(v2 ). 356 (cT )(k v1 ) = cT (k v1 ) = c[kT (v1 )] = (ck )T (v1 ) = (kc)T (v1 ) = k [cT (v1 )]. Thus, cT is a linear transformation. 35. (T1 + T2 )(x) = T1 (x) + T2 (x) = Ax + B x = (A + B )x = 5 6 2 −2 x1 x2 = 5x1 + 6x2 2x1 − 2x2 . Hence, (T1 + T2 )(x1 , x2 ) = (5x1 + 6x2 , 2x1 − 2x2 ). (cT1 )(x) = cT1 (x) = c(Ax) = (cA)x = 3c c −c 2c x1 x2 = 3cx1 + cx2 −cx1 + 2cx2 . Hence, (cT1 )(x1 , x2 ) = (3cx1 + cx2 , −cx1 + 2cx2 ). 36. (T1 + T2 )(x) = T1 (x) + T2 (x) = Ax + B x = (A + B )x. (cT1 )(x) = cT1 (x) = c(Ax) = (cA)x. 37. Problem 34 establishes that if T1 and T2 are in L(V, W ) and c is any scalar, then T1 + T2 and cT1 are in L(V, W ). Consequently, Axioms (A1) and (A2) are satisﬁed. A3: Let v be any vector in L(V, W ). Then (T1 + T2 )(v) = T1 (v) + T2 (v) = T2 (v) + T1 (v) = (T2 + T1 )(v). Hence T1 + T2 = T2 + T1 , therefore the addition operation is commutative. A4: Let T3 ∈ L(V, W ). Then [(T1 + T2 ) + T3 ] (v) = (T1 + T2 )(v) + T3 (v) = [T1 (v) + T2 (v)] + T3 (v) = T1 (v) + [T2 (v) + T3 (v)] = T1 (v) + (T2 + T3 )(v) = [T1 + (T2 + T3 )](v). Hence (T1 + T2 ) + T3 = T1 + (T2 + T3 ), therefore the addition operation is associative. A5: The zero vector in L(V, W ) is the zero transformation, O : V → W , deﬁned by O(v) = 0, for all v in V, where 0 denotes the zero vector in V . To show that O is indeed the zero vector in L(V, W ), let T be any transformation in L(V, W ). 
Then (T + O)(v) = T (v) + O(v) = T (v) + 0 = T (v) for all v ∈ V, so that T + O = T . A6: The additive inverse of the transformation T ∈ L(V, W ) is the linear transformation −T deﬁned by −T = (−1)T , since [T + (−T )](v) = T (v) + (−T )(v) = T (v) + (−1)T (v) = T (v) − T (v) = 0, for all v ∈ V , so that T + (−T ) = O. A7-A10 are all straightforward veriﬁcations. Solutions to Section 5.2 True-False Review: 1. FALSE. For example, T (x1 , x2 ) = (0, 0) is a linear transformation that maps every line to the origin. 2. TRUE. All of the matrices Rx , Ry , Rxy , LSx , LSy , Sx , and Sy discussed in this section are elementary matrices. 357 3. FALSE. A shear parallel to the x-axis composed with a shear parallel to the y -axis is given by matrix 1k 10 1 + kl k = , which is not a shear. 01 l1 l 1 4. TRUE. This is explained prior to Example 5.2.1. 5. FALSE. For example, Rxy · Rx = 0 1 1 0 1 0 0 −1 0 l k 0 0 −1 1 0 = , and this matrix is not in the form of a stretch. 6. FALSE. For example, k 0 0 1 1 0 = 0 l is not a stretch. Problems: 1. T (1, 1) = (1, −1), T (2, 1) = (1, −2), T (2, 2) = (2, −2), T (1, 2) = (2, −1). y 2 1 x 1 2 -1 -2 Figure 69: Figure for Problem 1 2. T (1, 1) = (0, 3), T (2, 1) = (1, 4), T (2, 2) = (0, 6), T (1, 2) = (−1, 5). y 6 5 4 3 2 1 x -2 -1 1 2 Figure 70: Figure for Problem 2 3. T (1, 1) = (2, 0), T (2, 1) = (3, −1), T (2, 2) = (4, 0), T (1, 2) = (3, 1). 358 y 2 1 x 1 2 3 4 -1 Figure 71: Figure for Problem 3 4. T (1, 1) = (−4, −2), T (2, 1) = (−6, −4), T (2, 2) = (−8, −4), T (1, 2) = (−6, −2). y 2 1 x -8 -6 -4 -2 1 2 -2 -4 Figure 72: Figure for Problem 4 1 0 5. A = 6. 0 2 2 0 2 1 1 ∼ 1. P12 =⇒ T (x) = Ax corresponds to a shear parallel to the x-axis. 2 0 0 0 2 ∼ 2. M1 (1/2) 1 0 0 2 3 ∼ 3. M2 (1/2). 1 0 0 1 So, T (x) = Ax = P12 M1 (2)M2 (2)x which corresponds to a stretch in the y -direction, followed by a stretch in the x-direction, followed by a reﬂection in y = x. 7. A = 8. 
1 3 −1 0 0 −1 0 1 =⇒ T (x) = Ax corresponds to a shear parallel to the y -axis. 1 ∼ 1 0 0 −1 2 ∼ 1 0 0 1 1. M1 (−1) 2. M2 (−1) So, T (x) = Ax = M1 (−1)M2 (−1)x which corresponds to a reﬂection in the x-axis, followed by a reﬂection in the y -axis. 9. 1 −3 −2 8 1 ∼ 1 −3 0 2 2 ∼ 1 −3 0 1 3 ∼ 1 0 0 1 1. A12 (2) 2. M2 (1/2) 3. A21 (3). So, T (x) = Ax = A12 (−2)M2 (2)A21 (−3)x which corresponds to a shear parallel to the x-axis, followed by a stretch in the y -direction, followed by a shear parallel to the y -axis. 359 10. 1 3 2 4 1 2 0 −2 1 ∼ 1 0 0 −2 2 ∼ 3 ∼ 1 0 0 1 1. A12 (−3) 2. A21 (1) 3. M2 (−1/2). So, T (x) = Ax = A12 (3)A21 (−1)M2 (−2)x which corresponds to a reﬂection in the x-axis followed by a stretch in they y -direction, followed by a shear parallel to the x-axis, followed by a shear parallel to the y -axis. 1 0 10 = 0 −2 02 a stretch in the y -direction. 1 0 0 −1 11. 12. −1 −1 −1 0 1 ∼ −1 −1 0 1 . So T (x) = Ax corresponds to a reﬂection in the x-axis followed by 1 0 2 ∼ 1 1 1 0 3 ∼ 0 1 1. A12 (−1) 2. M1 (−1) 3. A21 (−1). So, T (x) = Ax = A12 (1)M1 (−1)A21 (1)x which corresponds to a shear parallel to the x-axis, followed by a reﬂection in the y -axis, followed by a shear parallel to the y -axis. 13. R(θ) = cos θ 0 0 1 1 sin θ 0 1 1 0 = cos θ 0 0 1 1 sin θ 0 1 1 − tan θ 0 1 0 sec θ 1 − tan θ 0 sec θ cos θ 0 = 0 1 − tan θ cos θ 1 sin θ cos θ − sin θ sin θ cos θ which coincides with the matrix of the transformation of R2 corresponding to a rotation through an angle θ in the counter-clockwise direction. π 0 −1 14. The matrix for a counter-clockwise rotation through an angle θ = is . Now, 1 0 2 = 0 −1 1 0 1 ∼ 1 0 0 −1 2 ∼ 1 0 0 1 1. P12 2. M2 (−1) So, T (x) = Ax = P12 M2 (−1)x which corresponds to a reﬂection in the x-axis followed by a reﬂection in y = x. Solutions to Section 5.3 True-False Review: 1. FALSE. The statement should read dim[Ker(T )] + dim[Rng(T )] = dim[V ], not dim[W ] on the right-hand side. 2. FALSE. 
As a speciﬁc illustration, we could take T : P4 → R7 deﬁned by T (a0 + a1 x + a2 x2 + a3 x3 + a4 x4 ) = (a0 , a1 , a2 , a3 , a4 , 0, 0), 360 and it is easy to see that T is a linear transformation with Ker(T ) = {0}. Therefore, Ker(T ) is 0-dimensional. 3. FALSE. The solution set to the homogeneous linear system Ax = 0 is Ker(T ), not Rng(T ). 4. FALSE. Rng(T ) is a subspace of W , not V , since it consists of vectors of the form T (v), and these belong to W . 5. TRUE. From the given information, we see that Ker(T ) is at least 2-dimensional, and therefore, since M23 is 6-dimensional, the Rank-Nullity Theorem requires that Rng(T ) have dimension at most 6 − 2 = 4. 6. TRUE. Any vector of the form T (v) where v belongs to Rn can be written as Av, and this in turn can be expressed as a linear combination of the columns of A. Therefore, T (v) belongs to colspace(A). Problems: 7 5 = 0 =⇒ (7, 5, −1) ∈ Ker(T ). 1. T (7, 5, −1) = 0 −1 −21 −2 1 −1 2 −15 = =⇒ (−21, −15, 2) ∈ Ker(T ). / T (−21, −15, 2) = 3 1 −2 −3 2 35 0 1 −1 2 25 = =⇒ (35, 25, −5) ∈ Ker(T ). T (35, 25, −5) = 0 1 −2 −3 −5 1 −1 2 1 −2 −3 2. Ker(T ) = {x ∈ R2 : T (x) = 0} = {x ∈ R2 : Ax = 0}. The augmented matrix of the system Ax = 0 is: 120 360 , with reduced row-echelon form of . It follows that 120 000 Ker(T ) = {x ∈ R2 : x = (−2t, t), t ∈ R} = {x ∈ R2 : x = t(−2, 1), t ∈ R}. Geometrically, this is a line in R2 . It is the subspace of R2 spanned by the vector (−2, 1). dim[Ker(T )] = 1. For the given transformation, Rng(T ) = colspace(A). From the preceding reduced row-echelon form of A, we see that colspace(A) is generated by the ﬁrst column vector of A. Consequently, Rng(T ) = {y ∈ R2 : y = r(3, 1), r ∈ R}. Geometrically, this is a line in R2 . It is the subspace of R2 spanned by the vector (3, 1). dim[Rng(T )] = 1. Since dim[Ker(T )]+ dim[Rng(T )] = 2 = dim[R2 ], Theorem 5.3.8 is satisﬁed. 3. Ker(T ) = {x ∈ R3 : T (x) = 0} = {x ∈ R3 : Ax = 0}. 
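Row reduction underlies all of these kernel and range computations. A minimal Python sketch (the `rref` helper is ours; exact rational arithmetic) applied to the matrix A = [3 6; 1 2] of Problem 2:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row-echelon form over the rationals; returns (rref, pivot_columns)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # scale pivot row
        for i in range(len(m)):
            if i != r and m[i][c] != 0:             # clear the pivot column
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

A = [[3, 6], [1, 2]]
R, piv = rref(A)
rank, nullity = len(piv), len(A[0]) - len(piv)
print(rank, nullity)  # 1 1
```

Rank 1 and nullity 1 sum to 2 = dim[R²], as the Rank-Nullity Theorem requires, and the kernel vector (−2, 1) found above annihilates both rows of A.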
The augmented matrix of the system Ax = 0 1 −1 0 0 1000 1 2 0 , with reduced row-echelon form 0 1 0 0 . Thus x1 = x2 = x3 = 0, so is: 0 2 −1 1 0 0010 Ker(T ) = {0}. Geometrically, this describes a point (the origin) in R3 . dim[Ker(T )] = 0. For the given transformation, Rng(T ) = colspace(A). From the preceding reduced row-echelon form of A, we see that colspace(A) is generated by the ﬁrst three column vectors of A. Consequently, Rng(T ) = R3 , dim[Rng(T )] = dim[R3 ] = 3, and Theorem 5.3.8 is satisﬁed since dim[Ker(T )]+ dim[Rng(T )] = 0 + 3 = dim[R3 ]. 4. Ker(T ) = {x ∈ R3 : T (x) = 0} = {x ∈ R3 : Ax = 0}. The augmented matrix of the system Ax = 0 is: 361 1 −2 10 1 0 −5 0 2 −3 −1 0 , with reduced row-echelon form of 0 1 −3 0 . Thus 5 −8 −1 0 00 0 0 Ker(T ) = {x ∈ R3 : x = t(5, 3, 1), t ∈ R}. Geometrically, this describes the line in R3 through the origin, spanned by (5, 3, 1). dim[Ker(T )] = 1. For the given transformation, Rng(T ) = colspace(A). From the preceding reduced row-echelon form of A, we see that a basis for colspace(A) is given by the ﬁrst two column vectors of A. Consequently, Rng(T ) = {y ∈ R3 : y = r(1, 2, 5) + s(−2, −3, −8), r, s ∈ R}. Geometrically, this is a plane through the origin in R3 . dim[Rng(T )] = 2 and Theorem 5.3.8 is satisﬁed since dim[Ker(T )]+ dim[Rng(T )] = 1 + 2 = 3 = dim[R3 ]. 5. Ker(T ) = {x ∈ R3 : T (x) = 0} = {x ∈ R3 : Ax = 0}. The augmented matrix of the system Ax = 0 is: 1 −1 2 0 1 −1 20 , with reduced row-echelon form of . Thus −3 3 −6 0 0 000 Ker(T ) = {x ∈ R3 : x = r(1, 1, 0) + s(−2, 0, 1), r, s ∈ R}. Geometrically, this describes the plane through the origin in R3 , which is spanned by the linearly independent set {(1, 1, 0), (−2, 0, 1)}. dim[Ker(T )] = 2. For the given transformation, Rng(T ) = colspace(A). From the preceding reduced row-echelon form of A, we see that a basis for colspace(A) is given by the ﬁrst column vector of A. Consequently, Rng(T ) = {y ∈ R2 : y = t(1, −3), t ∈ R}. 
Geometrically, this is the line through the origin in R2 spanned by (1, −3). dim[Rng(T )] = 1 and Theorem 5.3.8 is satisﬁed since dim[Ker(T )]+ dim[Rng(T )] = 2 + 1 = 3 = dim[R3 ]. 6. Ker(T ) = {x ∈ R3 : T (x) = 0} = {x ∈ R3 : Ax = 0}. The augmented matrix of the system Ax = 0 is: 1320 1300 , with reduced row-echelon form of . Thus 2650 0010 Ker(T ) = {x ∈ R3 : x = r(−3, 1, 0), r ∈ R}. Geometrically, this describes the line through the origin in R3 , which is spanned by (−3, 1, 0). dim[Ker(T )] = 1. For the given transformation, Rng(T ) = colspace(A). From the preceding reduced row-echelon form of A, we see that a basis for colspace(A) is given by the ﬁrst and third column vectors of A. Consequently, Rng(T ) = span{(1, 2), (2, 5)} = R2 , so that dim[Rng(T )] = 2. Geometrically, Rng(T ) is the xy -plane, and Theorem 5.3.8 is satisﬁed since dim[Ker(T )]+ dim[Rng(T )] = 1 + 2 = 3 = dim[R3 ]. −5/3 −2/3 1/3 1/3 7. The matrix of T in Problem 24 of Section 5.1 is A = 5/3 −1/3 . Thus, −1 1 Ker(T ) = nullspace(A) = {0} 362 and −2/3 −5/3 1/3 , 1/3 . Rng(T ) = colspace(A) = span 5/3 −1/3 1 −1 3 −2 8. The matrix of T in Problem 25 of Section 5.1 is A = 2 −6 3 −1 3 2 . Thus, Ker(T ) = nullspace(A) = span{(16/13, 15/13, 1, 0), (−5/13, −12/13, 0, 1)} and Rng(T ) = colspace(A) = R2 . 32 3 0 2 −3 . Thus, 9. The matrix of T in Problem 26 of Section 5.1 is A = −5 4 −7 6 Ker(T ) = nullspace(A) = {0} and Rng(T ) = colspace(A) = R3 . 0 3/2 10. The matrix of T in Problem 27 of Section 5.1 is A = 0 −1/4 −2/3 1/3 −1 1 . Thus, 2/5 −2/5 17/15 8/15 Ker(T ) = nullspace(A) = {0} and Rng(T ) = colspace(A) = span −2/3 0 −1 3/2 , 0 2/5 −1/4 17/15 1/3 1 . , −2/5 8/15 11. (a) Ker(T ) = {v ∈ R3 : u, v = 0}. For v to be in the kernel of T , u and v must be orthogonal. Since u is any ﬁxed vector in R3 , then v must lie in the plane orthogonal to u. Hence dim[Ker(T )] = 2. (b) Rng(T ) = {y ∈ R : y = u, v , v ∈ R3 }, and dim[Rng(T )] = 1. 12. 
(a) Ker(S) = {A ∈ Mn(R) : A − Aᵀ = 0} = {A ∈ Mn(R) : A = Aᵀ}. Hence, any matrix in Ker(S) is symmetric by definition.
(b) Since any matrix in Ker(S) is symmetric, and it has been shown that {A1, A2, A3} is a spanning set for the set of all symmetric matrices in M2(R), where A1 = [1 0; 0 0], A2 = [0 1; 1 0], A3 = [0 0; 0 1], we conclude that dim[Ker(S)] = 3.

13. Ker(T) = {A ∈ Mn(R) : AB − BA = 0} = {A ∈ Mn(R) : AB = BA}. This is the set of matrices that commute with B.

14. (a) Ker(T) = {p ∈ P2 : T(p) = 0} = {ax² + bx + c ∈ P2 : ax² + (a + 2b + c)x + (3a − 2b − c) = 0, for all x}. Thus, for p(x) = ax² + bx + c to be in Ker(T), a, b, and c must satisfy the system
a = 0, a + 2b + c = 0, 3a − 2b − c = 0.
Solving this system, we obtain that a = 0 and c = −2b. Consequently, all polynomials of the form 0x² + bx + (−2b) are in Ker(T), so Ker(T) = {b(x − 2) : b ∈ R}. Since Ker(T) is spanned by the nonzero vector x − 2, it follows that dim[Ker(T)] = 1.
(b) In this case, Rng(T) = {T(ax² + bx + c) : a, b, c ∈ R} = {ax² + (a + 2b + c)x + (3a − 2b − c) : a, b, c ∈ R} = {a(x² + x + 3) + b(2x − 2) + c(x − 1) : a, b, c ∈ R} = {a(x² + x + 3) + (2b + c)(x − 1) : a, b, c ∈ R} = span{x² + x + 3, x − 1}. Since the vectors in this spanning set are linearly independent on any interval, it follows that the spanning set is a basis for Rng(T). Hence, dim[Rng(T)] = 2.

15. Ker(T) = {p ∈ P2 : T(p) = 0} = {ax² + bx + c ∈ P2 : (a + b) + (b − c)x = 0, for all x}. Thus, a, b, and c must satisfy a + b = 0 and b − c = 0, so a = −b and b = c. Letting c = r ∈ R, we have ax² + bx + c = r(−x² + x + 1). Thus, Ker(T) = {r(−x² + x + 1) : r ∈ R} and dim[Ker(T)] = 1.
Rng(T) = {T(ax² + bx + c) : a, b, c ∈ R} = {(a + b) + (b − c)x : a, b, c ∈ R} = {c1 + c2x : c1, c2 ∈ R}. Consequently, a basis for Rng(T) is {1, x}, so that Rng(T) = P1, and dim[Rng(T)] = 2.

16. Ker(T) = {p ∈ P1 : T(p) = 0} = {ax + b ∈ P1 : (b − a) + (2b − 3a)x + bx² = 0, for all x}.
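Kernel computations like the one in Problem 15 reduce to a linear system on the coefficient vector (a, b, c); a small Python sketch (the matrix encoding is ours) checking Problem 15's kernel vector:

```python
# Problem 15: T(ax^2 + bx + c) = (a + b) + (b - c)x. On coefficient vectors (a, b, c)
# this is multiplication by the 2x3 matrix M below.
M = [[1, 1, 0],   # constant coefficient of the image: a + b
     [0, 1, -1]]  # x coefficient of the image: b - c

def apply(M, v):
    """Matrix-vector product, row by row."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# The kernel vector (a, b, c) = (-1, 1, 1), i.e. p(x) = -x^2 + x + 1, maps to zero:
print(apply(M, [-1, 1, 1]))  # [0, 0]

# A polynomial outside the kernel, e.g. p(x) = x^2, maps to 1 + 0x:
print(apply(M, [1, 0, 0]))  # [1, 0]
```

The same encoding works for Problem 14, whose 3 × 3 coefficient system has the one-dimensional solution space spanned by (0, 1, −2), matching the kernel basis x − 2.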
Thus, a and b must satisfy: b−a = 0, 2b−3a = 0, and b = 0 =⇒ a = b = 0. Thus, Ker(T ) = {0} and dim[Ker(T )] = 0. Rng(T ) = {T (ax + b) : a, b ∈ R} = {(b − a) + (2b − 3a)x + bx2 : a, b ∈ R} = {−a(1 + 3x) + b(1 + 2x + x2 ) : a, b ∈ R} = span{1 + 3x, 1 + 2x + x2 }. Since the vectors in this spanning set are linearly independent on any interval, it follows that the spanning set is a basis for Rng(T ), and dim[Rng(T )] = 2. 17. T (v) = 0 ⇐⇒ T (av1 + bv2 + cv3 ) = 0 ⇐⇒ aT (v1 ) + bT (v2 ) + cT (v3 ) = 0 ⇐⇒ a(2w1 − w2 ) + b(w1 − w2 ) + c(w1 + 2w2 ) = 0 ⇐⇒ (2a + b + c)w1 + (−a − b + 2c)w2 = 0 ⇐⇒ 2a + b + c = 0 and a − b + 2c = 0. 364 Reducing the augmented matrix of the system yields: 2 110 1 −1 2 0 ∼ 1 −1 2 0 2 110 ∼ 1 0 0 10 1 −1 0 . Setting c = r =⇒ b = r, a = −r. Thus, Ker(T ) = {v ∈ V : v = r(−v1 + v2 + v3 ), r ∈ R} and dim[Ker(T )] = 1. Rng(T ) = {T (v) : v ∈ V } = {(2a + b + c)w1 + (−a − b + 2c)w2 : a, b, c ∈ V } = span{w1 , w2 } = W . Consequently, dim[Rng(T )] = 2. 18. (a) If w ∈ Rng(T ), then T (v) = w for some v ∈ V , and since {v1 , v2 , . . . , vn } is a basis for V , there exist c1 , c2 , . . . , cn ∈ R for which v = c1 v1 + c2 v2 + · · · + cn vn . Accordingly, w = T (v) = a1 T (v1 ) + a2 T (v2 ) + · · · + an T (vn ). Thus, Rng(T ) = span{T (v1 ), T (v2 ), . . . , T (vn )}. We must show that {T (v1 ), T (v2 ), . . . , T (vn )} is also linearly independent. Suppose that b1 T (v1 ) + b2 T (v2 ) + · · · + bn T (vn ) = 0. Then T (b1 v1 + b2 v2 + · · · + bn vn ) = 0, so since Ker(T ) = {0}, b1 v1 + b2 v2 + · · · + bn vn = 0. Since {v1 , v2 , . . . , vn } is a linearly independent set, the preceding equation implies that b1 = b2 = · · · = bn = 0. Consequently, {T (v1 ), T (v2 ), . . . , T (vn )} is a linearly independent set in W . Therefore, since we have already shown that it is a spanning set for Rng(T ), {T (v1 ), T (v2 ), . . . , T (vn )} is a basis for Rng(T ). (b) As an example, let T : R3 → R2 be deﬁned by T ((a, b, c)) = (a, b) for all (a, b, c) in R3 . 
Then for the basis {e1, e2, e3} of R³, we have {T(e1), T(e2), T(e3)} = {(1, 0), (0, 1), (0, 0)}, which is clearly not a basis for R².

Solutions to Section 5.4

True-False Review:

1. FALSE. Many one-to-one linear transformations T : P3 → M32 can be constructed. One possible example would be to define
T(a0 + a1x + a2x² + a3x³) = [a0 a1; a2 a3; 0 0].
It is easy to check that with T so defined, T is a one-to-one linear transformation.

2. TRUE. We can define an isomorphism T : V → M32 via
T([a b c; 0 d e; 0 0 f]) = [a b; c d; e f].
With T so defined, it is easy to check that T is an isomorphism.

3. TRUE. Both Ker(T1) and Ker(T2T1) are subspaces of V1, and since, if T1(v1) = 0, then (T2T1)v1 = T2(T1(v1)) = T2(0) = 0, we see that every vector in Ker(T1) belongs to Ker(T2T1). Therefore, Ker(T1) is a subspace of Ker(T2T1).

4. TRUE. Observe that T is not one-to-one, since Ker(T) ≠ {0} (because Ker(T) is 1-dimensional). Moreover, since M22 is 4-dimensional, the Rank-Nullity Theorem shows that Rng(T) is 3-dimensional. Since P2 is 3-dimensional, we conclude that Rng(T) = P2; that is, T is onto.

5. TRUE. Since M2(R) is 4-dimensional, Rng(T) can be at most 4-dimensional. However, P4 is 5-dimensional. Therefore, any such linear transformation T cannot be onto.

6. TRUE. If we assume that (T2T1)v = (T2T1)w, then T2(T1(v)) = T2(T1(w)). Since T2 is one-to-one, T1(v) = T1(w). Next, since T1 is one-to-one, we conclude that v = w. Therefore, T2T1 is one-to-one.

7. FALSE. This linear transformation is onto, but not one-to-one. The reason is essentially that the derivative of any constant is zero. Therefore, Ker(T) consists of all constant functions, and hence Ker(T) ≠ {0}.

8. TRUE. Since M23 is 6-dimensional and Rng(T) is only 4-dimensional, T is not onto. Moreover, since P3 is 4-dimensional, the Rank-Nullity Theorem implies that Ker(T) is 0-dimensional. Therefore, Ker(T) = {0}, and this means that T is one-to-one.

9. TRUE.
Recall that dim[Rⁿ] = n and dim[Rᵐ] = m. In order for such an isomorphism to exist, Rⁿ and Rᵐ must have the same dimension; that is, m = n.

10. FALSE. For example, the vector space of all polynomials with real coefficients is an infinite-dimensional real vector space, and since Rⁿ is finite-dimensional for all positive integers n, this statement is false.

11. FALSE. In order for this to be true, it would also have to be assumed that T1 is onto. For example, suppose V1 = V2 = V3 = R². If we define T2(x, y) = (x, y) for all (x, y) in R², then T2 is onto. However, if we define T1(x, y) = (0, 0) for all (x, y) in R², then (T2T1)(x, y) = T2(T1(x, y)) = T2(0, 0) = (0, 0) for all (x, y) in R². Therefore T2T1 is not onto, since Rng(T2T1) = {(0, 0)}, even though T2 itself is onto.

12. TRUE. This is a direct application of the Rank-Nullity Theorem. Since T is assumed to be onto, Rng(T) = R³, which is 3-dimensional. Therefore the dimension of Ker(T) is 8 − 3 = 5.

Problems:

1. T1T2(x) = T1(T2(x)) = T1(Bx) = (AB)x, so T1T2 = AB. Similarly, T2T1 = BA. Here
AB = [−1 2; 3 1][1 5; −2 0] = [−5 −5; 1 15] and BA = [1 5; −2 0][−1 2; 3 1] = [14 7; 2 −4].
Thus
T1T2(x) = (AB)x = [−5 −5; 1 15][x1; x2] = (−5(x1 + x2), x1 + 15x2)
and
T2T1(x) = (BA)x = [14 7; 2 −4][x1; x2] = (7(2x1 + x2), 2(x1 − 2x2)).
Clearly, T1T2 ≠ T2T1.

2. T2(T1(x)) = B(A(x)) = [−1 1]([1 −1; 3 2][x1; x2]) = [−1 1][x1 − x2; 3x1 + 2x2] = −(x1 − x2) + (3x1 + 2x2) = 2x1 + 3x2 = [2 3]x. T1T2 does not exist because T1 must have a domain of R², yet the range of T2, which must be the domain of T1, is R.

3. Ker(T1) = {x ∈ R² : Ax = 0}. Ax = 0 =⇒ [1 −1; 2 −2][x1; x2] = [0; 0]. This matrix equation results in the system x1 − x2 = 0 and 2x1 − 2x2 = 0, or equivalently, x1 = x2. Thus,
Ker(T1) = {x ∈ R² : x = r(1, 1) where r ∈ R}.
Geometrically, this is the line through the origin in R² spanned by (1, 1).
Ker(T2) = {x ∈ R² : Bx = 0}. Bx = 0 =⇒ [2 1; 3 −1][x1; x2] = [0; 0]. This matrix equation results in the system 2x1 + x2 = 0 and 3x1 − x2 = 0, or equivalently, x1 = x2 = 0. Thus, Ker(T2) = {0}. Geometrically, this is a point (the origin).
Ker(T1T2) = {x ∈ R² : (AB)x = 0}. (AB)x = 0 =⇒ [1 −1; 2 −2][2 1; 3 −1][x1; x2] = [0; 0] =⇒ [−1 2; −2 4][x1; x2] = [0; 0]. This matrix equation results in the system −x1 + 2x2 = 0 and −2x1 + 4x2 = 0, or equivalently, x1 = 2x2. Thus,
Ker(T1T2) = {x ∈ R² : x = s(2, 1) where s ∈ R}.
Geometrically, this is the line through the origin in R² spanned by the vector (2, 1).
Ker(T2T1) = {x ∈ R² : (BA)x = 0}. (BA)x = 0 =⇒ [2 1; 3 −1][1 −1; 2 −2][x1; x2] = [0; 0] =⇒ [4 −4; 1 −1][x1; x2] = [0; 0]. This matrix equation results in the system 4x1 − 4x2 = 0 and x1 − x2 = 0, or equivalently, x1 = x2. Thus,
Ker(T2T1) = {x ∈ R² : x = t(1, 1) where t ∈ R}.
Geometrically, this is the line through the origin in R² spanned by the vector (1, 1).

4. (T2T1)(A) = T2(T1(A)) = T2(A − Aᵀ) = (A − Aᵀ) + (A − Aᵀ)ᵀ = (A − Aᵀ) + (Aᵀ − A) = 0n.

5. (a) (T1(f))(x) = d/dx [sin(x − a)] = cos(x − a), and (T2(f))(x) = ∫ₐˣ sin(t − a) dt = [−cos(t − a)]ₐˣ = 1 − cos(x − a). Hence
(T1T2)(f)(x) = d/dx [1 − cos(x − a)] = sin(x − a) = f(x), and (T2T1)(f)(x) = ∫ₐˣ cos(t − a) dt = [sin(t − a)]ₐˣ = sin(x − a) = f(x).
Consequently, (T1T2)(f) = (T2T1)(f) = f.
(b) (T1T2)(f) = T1(T2(f)) = T1(∫ₐˣ f(t) dt) = d/dx ∫ₐˣ f(t) dt = f(x), while (T2T1)(g) = T2(T1(g)) = T2(dg/dx) = ∫ₐˣ (dg/dt) dt = g(x) − g(a).

6. Let v ∈ V. There exist a, b ∈ R such that v = av1 + bv2. Then
T2T1(v) = T2[aT1(v1) + bT1(v2)] = T2[a(v1 − v2) + b(2v1 + v2)] = T2[(a + 2b)v1 + (b − a)v2] = (a + 2b)T2(v1) + (b − a)T2(v2) = (a + 2b)(v1 + 2v2) + (b − a)(3v1 − v2) = (5b − 2a)v1 + 3(a + b)v2.

7. Let v ∈ V. There exist a, b ∈ R such that v = av1 + bv2. Then
T2T1(v) = T2[aT1(v1) + bT1(v2)] = T2[a(3v1 + v2)] = 3aT2(v1) + aT2(v2) = 3a(−5v2) + a(−v1 + 6v2) = −av1 − 9av2.

8.
Ker(T ) = {x ∈ R2 : T (x) = 0} = {x ∈ R2 : Ax = 0}. The augmented matrix of the system Ax = 0 420 100 is: , with reduced row-echelon form of . It follows that Ker(T ) = {0}. Hence T is 130 010 one-to-one by Theorem 5.4.7. For the given transformation, Rng(T ) = {y ∈ R2 : Ax = y is consistent}. The augmented matrix of the 1 0 (3y1 − y2 )/5 4 2 y1 . The system is, with reduced row-echelon form of system Ax = y is 1 3 y2 0 1 (2y2 − y1 )/5 therefore, consistent for all (y1 , y2 ), so that Rng(T ) = R2 . Consequently, T is onto. T −1 exists since T has been shown to be both one-to-one and onto. Using the Gauss-Jordan method for computing A−1 , we have 4 1 Thus, A−1 = 3 10 1 − 10 −1 5 2 5 21 30 0 1 ∼ 1 0 1 2 5 2 1 4 1 −4 0 1 ∼ 3 10 10 1 0 1 − 10 −1 5 2 5 . , so T −1 (y) = A−1 y. 9. Ker(T ) = {x ∈ R2 : T (x) = 0} = {x ∈ R2 : Ax = 0}. The augmented matrix of the system Ax = 0 is: 120 1 20 , with reduced row-echelon form of . It follows that −2 −4 0 000 Ker(T ) = {x ∈ R2 : x = t(−2, 1), t ∈ R}. By Theorem 5.4.7, since Ker(T ) = {0}, T is not one-to-one. This also implies that T −1 does not exist. For the given transformation, Rng(T ) = {y ∈ R2 : Ax = y is consistent}. The augmented matrix of the 1 2 y1 y1 12 system Ax = y is with reduced row-echelon form of . The last row of −2 −4 y2 0 0 2y 1 + y 2 this matrix implies that 2y1 + y2 = 0 is required for consistency. Therefore, it follows that Rng(T ) = {(y1 , y2 ) ∈ R2 : 2y1 + y2 = 0} = {y ∈ R2 : y = s(1, −2), s ∈ R}. T is not onto because dim[Rng(T )] = 1 = 2 = dim[R2 ]. 10. Ker(T ) = {x ∈ R3 : T (x) = 0} = {x ∈ R3 : Ax = 0}. The augmented matrix of the system Ax = 0 is: 1 0 −7 0 1 2 −1 0 , with reduced row-echelon form . It follows that 25 10 01 30 Ker(T ) = {(x1 , x2 , x3 ) ∈ R3 : x1 = 7t, x2 = −3t, x3 = t, t ∈ R} = {x ∈ R3 : x = t(7, −3, 1), t ∈ R}. By Theorem 5.4.7, since Ker(T ) = {0}, T is not one-to-one. This also implies that T −1 does not exist. 
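The inverse computed in Problem 8 can be double-checked with exact arithmetic. Reading A = [4 2; 1 3] from the (garbled) augmented matrices above, the 2×2 inverse formula reproduces the stated A⁻¹:

```python
from fractions import Fraction as F

# Problem 8 check: A = [[4, 2], [1, 3]] has nonzero determinant, so T is
# invertible, and A^{-1} matches the Gauss-Jordan result in the text.

A = [[F(4), F(2)], [F(1), F(3)]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert det == 10                       # nonzero det ⇒ Ker(T) = {0}

# 2x2 inverse formula: (1/det) * [[d, -b], [-c, a]]
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]
assert Ainv == [[F(3, 10), F(-1, 5)], [F(-1, 10), F(2, 5)]]
```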
For the given transformation, Rng(T ) = {y ∈ R2 : Ax = y is consistent}. The augmented matrix of the system Ax = y is 1 0 −7 5y1 − 2y2 1 2 −1 y1 ∼ . 25 1 y2 01 3 y 2 − 2y 1 We clearly have a consistent system for all y = (y1 , y2 ) ∈ R2 , thus Rng(T ) = R2 . Therefore, T is onto by Deﬁnition 5.3.3. 11. Reducing A to row-echelon form, we obtain 1 0 REF(A) = 0 0 3 1 0 0 5 2 . 0 0 368 We quickly ﬁnd that Ker(T ) = nullspace(A) = span{(1, −2, 1)}. Moreover, 1 0 34 Rng(T ) = colspace(A) = span , 4 5 2 1 . Based on these calculations, we see that T is neither one-to-one nor onto. 12. We have and so 2 1 1 (0, 0, 1) = − (2, 1, −3) + (1, 0, 0) + (0, 1, 0), 3 3 3 1 2 1 T (0, 0, 1) = − T (2, 1, −3) + T (1, 0, 0) + T (0, 1, 0) 3 3 3 1 2 1 = − (7, −1) + (4, 5) + (−1, 1) 3 3 3 = (0, 4). (a) From the given information and the above calculation, we ﬁnd that the matrix of T is A = 4 −1 0 5 14 . (b) Because A has more columns than rows, REF(A) must have an unpivoted column, which implies that nullspace(A) = {0}. Hence, T is not one-to-one. On the other hand, colspace(A) = R2 since the ﬁrst two columns, for example, are linearly independent vectors in R2 . Thus, T is onto. 13. Show T is a linear transformation: Let x, y ∈ V and c, λ ∈ R where λ = 0. T (x + y) = λ(x + y) = λx + λy = T (x) + T (y), and T (cx) = λ(cx) = c(λx) = cT (x). Thus, T is a linear transformation. Show T is one-to-one: x ∈ Ker(T ) ⇐⇒ T (x) = 0 ⇐⇒ λx = 0 ⇐⇒ x = 0, since λ = 0. Thus, Ker(T ) = {0}. By Theorem 5.4.7, T is one-to-one. Show T is onto: dim[Ker(T )]+ dim[Rng(T )] = dim[V ] =⇒ dim[{0}]+ dim[Rng(T )] = dim[V ] =⇒ 0 + dim[Rng(T )] = dim[V ] =⇒ dim[Rng(T )] = dim[V ] =⇒ Rng(T ) = V . Thus, T is onto by Deﬁnition 5.3.3. Find T −1 : 1 1 1 T −1 (x) = x since T (T −1 (x)) = T x =λ x = x. λ λ λ 14. T : P1 → P1 where T (ax + b) = (2b − a)x + (b + a). Show T is one-to-one: Ker(T ) = {p ∈ P1 : T (p) = 0} = {ax + b ∈ P1 : (2b − a)x + (b + a) = 0}. Thus, a and b must satisfy the system 2b − a = 0, b + a = 0. 
The only solution to this system is a = b = 0. Consequently, Ker(T ) = {0}, so T is one-to-one. Show T is onto: Since Ker(T ) = {0}, dim[Ker(T )] = 0. Thus, dim[Ker(T )]+ dim[Rng(T )] = dim[P1 ] =⇒ dim[Rng(T )] = dim[P1 ], and since Rng(T ) is a subspace of P1 , it follows that Rng(T ) = P1 . Consequently, T is onto by Theorem 5.4.7. 369 Determine T −1 : Since T is one-to-one and onto, T −1 exists. T (ax + b) = (2b − a)x + (b + a) =⇒ T −1 [(2b − a)x + (b + a)] = ax + b. If we let A = 2b − a and B = a + b, then b = 1 1 so that T −1 [Ax + B ] = (2B − A)x + (A + B ). 3 3 2B − A A+B and a = 3 3 15. T is not one-to-one: Ker(T ) = {p ∈ P2 : T (p) = 0} = {ax2 + bx + c : c + (a − b)x = 0}. Thus, a, b, and c must satisfy the system c = 0 and a − b = 0. Consequently, Ker(T ) = {r(x2 + x) : r ∈ R} = {0}. Thus, by Theorem 5.4.7, T is not one-to-one. T −1 does not exist because T is not one-to-one. T is onto: Since Ker(T ) = {r(x2 + x) : r ∈ R}, we see that dim[Ker(T )] = 1. Thus, dim[Ker(T )]+ dim[Rng(T )] = dim[P2 ] =⇒ 1+ dim[Rng(T )] = 3 =⇒ dim[Rng(T )] = 2. Since Rng(T ) is a subspace of P1 , 2 = dim[Rng(T )] ≤ dim[P1 ] = 2, and so equality holds: Rng(T ) = P1 . Thus, T is onto by Theorem 5.4.7. 16. {v1 , v2 } is a basis for V and T : V → V is a linear transformation. Show T is one-to-one: Any v ∈ V can be expressed as v = av1 + bv2 where a, b ∈ R. T (v) = 0 ⇐⇒ T (av1 + bv2 ) = 0 ⇐⇒ aT (v1 ) + bT (v2 ) = 0 ⇐⇒ a(v1 + 2v2 ) + b(2v1 − 3v2 ) = 0 ⇐⇒ (a + 2b)v1 + (2a − 3b)v2 = 0 ⇐⇒ a + 2b = 0 and 2a − 3b = 0 ⇐⇒ a = b = 0 ⇐⇒ v = 0. Hence Ker(T ) = {0}. Therefore T is one-to-one. Show T is onto: dim[Ker(T )] = dim[{0}] = 0 and dim[Ker(T )]+ dim[Rng(T )] = dim[V ] implies that 0+ dim[Rng(T )] = 2 or dim[Rng(T )] = 2. Since Rng(T ) is a subspace of V , it follows that Rng(T ) = V . Thus, T is onto by Theorem 5.4.7. Determine T −1 : Since T is one-to-one and onto, T −1 exists. T (av1 + bv2 ) = (a + 2b)v1 + (2a − 3b)v2 1 =⇒ T −1 [(a + 2b)v1 + (2a − 3b)v2 ] = (av1 + bv2 ). 
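The inverse found in Problem 14 can be verified by composing it with T over a range of inputs. The helper names `T` and `Tinv` are my own; the formulas are the ones derived above.

```python
from fractions import Fraction as F

# Problem 14 check: with T(ax + b) = (2b - a)x + (b + a), the stated inverse
# T^{-1}(Ax + B) = (2B - A)/3 · x + (A + B)/3 really undoes T.

def T(a, b):              # returns coefficients (A, B) of (2b - a)x + (b + a)
    return (2 * b - a, b + a)

def Tinv(A, B):           # inverse formula from the solution
    return (F(2 * B - A, 3), F(A + B, 3))

for a in range(-3, 4):
    for b in range(-3, 4):
        A, B = T(a, b)
        assert Tinv(A, B) == (a, b)
```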
If we let A = a + 2b and B = 2a − 3b, then a = (3A + 2B ) 7 1 1 1 and b = (2A − B ). Hence, T −1 (Av1 + B v2 ) = (3A + 2B )v1 + (2A − B )v2 . 7 7 7 17. Let v ∈ V . Then there exists a, b ∈ R such that v = av1 + bv2 . a b (T1 T2 )v = T1 [aT2 (v1 ) + bT2 (v2 )] = T1 (v1 + v2 ) + (v1 − v2 ) 2 2 a+b a−b a+b a−b = T1 (v1 ) + T1 (v2 ) = (v1 + v2 ) + (v1 − v2 ) 2 2 2 2 a+b a−b a+b a−b = + v1 + − v2 = av1 + bv2 = v. 2 2 2 2 (T2 T1 )v = T2 [aT1 (v1 ) + bT1 (v2 )] = T2 [a(v1 + v2 ) + b(v1 − v2 )] = T2 [(a + b)v1 + (a − b)v2 ] = (a + b)T2 (v1 ) + (a − b)T2 (v2 ) a+b a−b = (v1 + v2 ) + (v1 − v2 ) = av1 + bv2 = v. 2 2 − Since (T1 T2 )v = v and (T2 T1 )v = v for all v ∈ V , it follows that T2 is the inverse of T1 , thus T2 = T1 1 . 370 18. An arbitrary vector in P1 can be written as p(x) = ap0 (x) + bp1 (x), where p0 (x) = 1, and p1 (x) = x denote the standard basis vectors in P1 . Hence, we can deﬁne an isomorphism T : R2 → P1 by T (a, b) = a + bx. 19. Let S denote the subspace of M2 (R) consisting of all upper triangular matrices. An arbitrary vector in S can be written as ab 10 01 00 =a +b +c . 0c 00 00 01 Therefore, we deﬁne an isomorphism T : R3 → S by ab 0c T (a, b, c) = . 20. Let S denote the subspace of M2 (R) consisting of all skew-symmetric matrices. An arbitrary vector in S can be written as 0a 01 =a . −a 0 −1 0 Therefore, we can deﬁne an isomorphism T : R → S by T (a) = 0a −a 0 . 21. Let S denote the subspace of M2 (R) consisting of all symmetric matrices. An arbitrary vector in S can be written as ab 10 01 00 =a +b +c . bc 00 10 01 Therefore, we can deﬁne an isomorphism T : R3 → S by T (a, b, c) = ab 0 e 22. A typical vector in V takes the form A = 0 0 00 ab bc c f h 0 . d g . Therefore, we can deﬁne T : V → R10 via i j T (A) = (a, b, c, d, e, f, g, h, i, j ). It is routine to verify that T is an invertible linear transformation. Therefore, we have n = 10. 23. A typical vector in V takes the form p = a0 + a2 x2 + a4 x4 + a6 x6 + a8 x8 . 
Therefore, we can deﬁne T : V → R5 via T (p) = (a0 , a2 , a4 , a6 , a8 ). It is routine to verify that T is an invertible linear transformation. Therefore, we have n = 5. 371 24. We have − T1 1 (x) = A−1 x = 25. We have 11/14 −8/7 1/14 − 5/7 1/7 x. T3 1 (x) = A−1 x = −3/7 3/14 1/7 −1/14 8 −29 3 − 19 −2 x. T4 1 (x) = A−1 x = −5 2 −8 1 28. The matrix of T2 T1 is 1 4 x. 27. We have −2 −2 x. −1/3 −1/6 1/3 2/3 − T2 1 (x) = A−1 x = 26. We have 3 −1 −2 1 −4 −1 2 2 1 2 1 3 = −6 −7 6 8 . The matrix of T1 T2 is 1 2 1 3 −4 −1 2 2 . 351 29. The matrix of T4 T3 is 1 2 1 2 6 7 11 3 351 10 25 23 0 1 2 1 2 1 = 5 14 15 3 5 −1 267 12 19 1 11 3 6 01 2= 4 3 5 −1 23 13 18 8 6 . 43 11 The matrix of T3 T4 is . 30. We have (T2 T1 )(cu1 ) = T2 (T1 (cu1 )) = T2 (cT1 (u1 )) = c(T2 (T1 (u1 ))) = c(T2 T1 )(u1 ), as needed. 31. We ﬁrst prove that if T is one-to-one, then T is onto. The assumption that T is one-to-one implies that dim[Ker(T )] = 0. Hence, by Theorem 5.3.8, dim[W ] = dim[V ] = dim[Rng(T )], which implies that Rng(T ) = W . That is, T is onto. Next we show that if T is onto, then T is one-to-one. We have Rng(T ) = W , so that dim[Rng(T )] = dim[W ] = dim[V ]. Hence, by Theorem 5.3.8, we have dim[Ker(T )] = 0. Hence, Ker(T ) = {0}, which implies that T is one-to-one. 32. If T : V → W is linear, then show that T −1 : W → V is also linear. Let y, z ∈ W, c ∈ R, and T (u) = y and T (v) = z where u, v ∈ V . Then T −1 (y) = u and T −1 (z) = v. Thus, T −1 (y + z) = T −1 (T (u) + T (v)) = T −1 (T (u + v)) = u + v = T −1 (y) + T −1 (z), = 372 and T −1 (cy) = T −1 (cT (u)) = T −1 (T (cu)) = cu = cT −1 (y). Hence, T −1 is a linear transformation. 33. Let T : V → V be a one-to-one linear transformation. Since T is one-to-one, it follows from Theorem 5.4.7 that Ker(T ) = {0}, so dim[Ker(T )] = 0. 
By Theorem 5.3.8 and substitution, dim[Ker(T )]+ dim[Rng(T )] = dim[V ] =⇒ 0 + dim[Rng(T )] = dim[V ] =⇒ dim[Rng(T )] = dim[V ], and since Rng(T ) is a subspace of V , it follows that Rng(T ) = V , thus T is onto. T −1 exists because T is both one-to-one and onto. 34. To show that {T (v1 ), T (v2 ), . . . , T (vk )} is linearly independent, assume that c1 T (v1 ) + c2 T (v2 ) + · · · + ck T (vk ) = 0. We must show that c1 = c2 = · · · = ck = 0. Using the linearity properties of T , we can write T (c1 v1 + c2 v2 + · · · + ck vk ) = 0. Now, since T is one-to-one, we can conclude that c1 v1 + c2 v2 + · · · + ck vk = 0, and since {v1 , v2 , . . . , vk } is linearly independent, we conclude that c1 = c2 = · · · = ck = 0 as desired. 35. To prove that T is onto, let w be an arbitrary vector in W . We must ﬁnd a vector v in V such that T (v) = w. Since {w1 , w2 , . . . , wm } spans W , we can write w = c1 w1 + c2 w2 + · · · + cm wm for some scalars c1 , c2 , . . . , cm . Therefore T (c1 v1 + c2 v2 + · · · + cm vm ) = c1 T (v1 ) + c2 T (v2 ) + · · · + cm T (vm ) = c1 w1 + c2 w2 + · · · + cm wm = w, which shows that v = c1 v1 + c2 v2 + · · · + cm vm maps under T to w, as desired. 36. Since T is a linear transformation, Rng(T ) is a subspace of W , but dim[W ] = dim[Rng(T )] = n, so W = Rng(T ), which means, by deﬁnition, that T is onto. 37. CORRECTION: ASSUME THAT T1 IS ONE-TO-ONE. Let v be an arbitrary vector in V . Since (T1 T2 )(v) = v for all v in V , we can use T1 (v) instead of v. Thus, (T1 T2 )(T1 (v)) = T1 (v). Since T1 is one-to-one, we conclude that T2 (T1 (v)) = v. That is, (T2 T1 )(v) = v, as required. 38. Suppose that x belongs to Rng(T ). This means that there exists a vector v in V such that T (v) = x. Applying T to both sides of this equation, we have T (T (v)) = T (x), or T 2 (v) = T (x). However, since T 2 = 0, we conclude that T 2 (v) = 0, and hence, T (x) = 0. By deﬁnition, this means that x belongs to Ker(T ). 
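The situation in Problem 38 can be made concrete with a nilpotent matrix. The matrix N below is my own illustration, not from the text: since N² = 0, everything in the range of N must be killed by N again, i.e. Rng(T) ⊆ Ker(T).

```python
# Illustration (hypothetical matrix) of Problem 38: T^2 = 0 forces
# every vector in Rng(T) to lie in Ker(T).

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

N = [[0, 1], [0, 0]]
assert matmul(N, N) == [[0, 0], [0, 0]]     # T^2 = 0

# Rng(N) = span{(1, 0)}; applying N again sends it to 0, so it is in Ker(N).
image = matvec(N, [0, 1])
assert image == [1, 0]
assert matvec(N, image) == [0, 0]
```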
Thus, since every vector in Rng(T ) belongs to Ker(T ), Rng(T ) is a subset of Ker(T ). In addition, Rng(T ) is closed under addition and scalar multiplication, by Theorem 5.3.5, and therefore, Rng(T ) forms a subspace of Ker(T ). 39. (a) To show that T2 T1 : V1 → V3 is one-to-one, we show that Ker(T2 T1 ) = {0}. Suppose that v1 ∈ Ker(T2 T1 ). This means that (T2 T1 )(v1 ) = 0. Hence, T2 (T1 (v1 )) = 0. However, since T2 is one-to-one, we conclude that T1 (v1 ) = 0. Next, since T1 is one-to-one, we conclude that v1 = 0, which shows that the only vector in Ker(T2 T1 ) is 0, as expected. 373 (b) To show that T2 T1 : V1 → V3 is onto, we begin with an arbitrary vector v3 in V3 . Since T2 : V2 → V3 is onto, there exists v2 in V2 such that T2 (v2 ) = v3 . Moreover, since T1 : V1 → V2 is onto, there exists v1 in V1 such that T1 (v1 ) = v2 . Therefore, (T2 T1 )(v1 ) = T2 (T1 (v1 )) = T2 (v2 ) = v3 , and therefore, we have found a vector, namely v1 , in V1 that is mapped under T2 T1 to v3 . Hence, T2 T1 is onto. (c) This follows immediately from parts (a) and (b). Solutions to Section 5.5 True-False Review: 1. FALSE. The matrix representation is an m × n matrix, not an n × m matrix. 2. FALSE. The matrix [T ]B would only make sense if C was a basis for V and B was a basis for W , and C this would only be true if V and W were the same vector space. Of course, in general V and W can be diﬀerent, and so this statement does not hold. 3. FALSE. The correct equation is given in (5.5.2). 4. FALSE. The correct statement is [T ]C B −1 = [T −1 ]B . C 5. TRUE. Many examples are possible. A fairly simple one is the following. Let T1 : R2 → R2 be given by T1 (x, y ) = (x, y ), and let T2 : R2 → R2 be given by T2 (x, y ) = (y, x). Clearly, T1 and T2 are diﬀerent 10 linear transformations. Now if B1 = {(1, 0), (0, 1)} = C1 , then [T1 ]C1 = . If B2 = {(1, 0), (0, 1)} and B1 01 10 . 
Thus, although T1 = T2 , we found suitable bases B1 , C1 , B2 , C2 C2 = {(0, 1), (1, 0)}, then [T2 ]C2 = B2 01 such that [T1 ]C1 = [T2 ]C2 . B1 B2 6. TRUE. This is the content of part (b) of Corollary 5.5.10. Problems: 1. (a): We must determine T (1), T (x), and T (x2 ), and ﬁnd the components of the resulting vectors in R2 relative to the basis C . We have T (1) = (1, 2), T (x) = (0, 1), T (x2 ) = (−3, −2). Therefore, relative to the standard basis C on R2 , we have [T (1)]C = 1 2 , [T (x)]C = 0 1 , [T (x2 )]C = −3 −2 . Therefore, [T ]C = B 1 2 0 −3 1 −2 . (b): We must determine T (1), T (1 + x), and T (1 + x + x2 ), and ﬁnd the components of the resulting vectors in R2 relative to the basis C . We have T (1) = (1, 2), T (1 + x) = (1, 3), T (1 + x + x2 ) = (−2, 1). 374 Setting T (1) = (1, 2) = c1 (1, −1) + c2 (2, 1) −1 1 and solving, we ﬁnd c1 = −1 and c2 = 1. Thus, [T (1)]C = . Next, setting T (1 + x) = (1, 3) = c1 (1, −1) + c2 (2, 1) −5/3 4/3 and solving, we ﬁnd c1 = −5/3 and c2 = 4/3. Thus, [T (1 + x)]C = . Finally, setting T (1 + x + x2 ) = (−2, 1) = c1 (1, −1) + c2 (2, 1) −4/3 −1/3 into the columns of [T ]C , we obtain B and solving, we ﬁnd c1 = −4/3 and c2 = −1/3. Thus, [T (1 + x + x2 )]C = 2 [T (1)]C , [T (1 + x)]C , and [T (1 + x + x )]C [T ]C = B 5 −1 − 3 4 1 3 −4 3 −1 3 . Putting the results for . 2. (a): We must determine T (E11 ), T (E12 ), T (E21 ), and T (E22 ), and ﬁnd the components of the resulting vectors in P3 relative to the basis C . We have T (E11 ) = 1 − x3 , T (E12 ) = 3x2 , T (E21 ) = x3 , Therefore, relative to the standard basis C on P3 , we have 0 0 1 0 0 0 [T (E11 )]C = 0 , [T (E12 )]C = 3 , [T (E21 )]C = 0 1 0 −1 Putting these results into the columns of [T ]C , we obtain B 10 00 C [T ]B = 03 −1 0 T (E22 ) = −1. , −1 0 [T (E22 )]C = 0 . 0 0 −1 0 0 . 0 0 1 0 (b): The values of T (E21 ), T (E11 ), T (E22 ), and T (E12 ) were determined in part (a). We must express those results in terms of the ordered basis C given in (b). 
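The coordinate solves in Problem 1(b) above can be verified with Cramer's rule. The helper `coords` is my own; the basis C = {(1, −1), (2, 1)} and the target vectors are the ones from the solution.

```python
from fractions import Fraction as F

# Problem 1(b) check: components relative to C = {(1, -1), (2, 1)} via
# Cramer's rule reproduce c1, c2 found in the text.

def coords(v, b1, b2):
    det = b1[0] * b2[1] - b2[0] * b1[1]
    c1 = F(v[0] * b2[1] - b2[0] * v[1], det)
    c2 = F(b1[0] * v[1] - v[0] * b1[1], det)
    return c1, c2

b1, b2 = (1, -1), (2, 1)
assert coords((1, 2), b1, b2) == (-1, 1)                  # [T(1)]_C
assert coords((1, 3), b1, b2) == (F(-5, 3), F(4, 3))      # [T(1 + x)]_C
assert coords((-2, 1), b1, b2) == (F(-4, 3), F(-1, 3))    # [T(1 + x + x^2)]_C
```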
We have 0 0 0 0 0 1 −1 0 [T (E21 )]C = , [T (E11 )]C = 1 −1 , [T (E22 )]C = 0 , [T (E12 )]C = 0 . 0 0 0 3 Putting these results into the columns of [T ]C , we obtain B 0 0 00 0 1 −1 0 C [T ]B = 1 −1 00 0 0 03 . 375 3. (a): We must determine T (1, 0, 0), T (0, 1, 0), and T (0, 0, 1), and ﬁnd the components of the resulting vectors relative to the basis C . We have T (1, 0, 0) = cos x, T (0, 0, 1) = −2 cos x + sin x. T (0, 1, 0) = 3 sin x, Therefore, relative to the basis C , we have [T (1, 0, 0)]C = 1 0 , [T (0, 1, 0)]C = 0 3 , [T (0, 0, 1)]C = −2 1 . Putting these results into the columns of [T ]C , we obtain B [T ]C = B 1 0 0 −2 3 1 . (b): We must determine T (2, −1, −1), T (1, 3, 5), and T (0, 4, −1), and ﬁnd the components of the resulting vectors relative to the basis C . We have T (2, −1, −1) = 4 cos x − 4 sin x, T (1, 3, 5) = −9 cos x + 14 sin x, T (0, 4, −1) = 2 cos x + 11 sin x. Setting 4 cos x − 4 sin x = c1 (cos x − sin x) + c2 (cos x + sin x) and solving, we ﬁnd c1 = 4 and c2 = 0. Therefore [T (2, −1, −1)]C = 4 0 . Next, setting −9 cos x + 14 sin x = c1 (cos x − sin x) + c2 (cos x + sin x) and solving, we ﬁnd c1 = −23/2 and c2 = 5/2. Therefore, [T (1, 3, 5)]C = −23/2 5/2 . Finally, setting 2 cos x + 11 sin x = c1 (cos x − sin x) + c2 (cos x + sin x) and solving, we ﬁnd c1 = −9/2 and c2 = 13/2. Therefore [T (0, 4, −1)]C = into the columns of [T ]C , we obtain B [T ]C = B 4 − 23 2 5 0 2 −9 2 13 2 −9/2 13/2 . Putting these results . 4. (a): We must determine T (1), T (x), and T (x2 ), and ﬁnd the components of the resulting vectors relative to the standard basis C on P3 . We have T (1) = 1 + x, T (x) = x + x2 , T (x2 ) = x2 + x3 . Therefore, 1 1 [T (1)]C = , 0 0 0 1 [T (x)]C = , 1 0 0 0 [T (x2 )]C = . 1 1 376 Putting these results into the columns of [T ]C , we obtain B 10 1 1 [T ]C = B 0 1 00 0 0 . 1 1 (b): We must determine T (1), T (x − 1), and T ((x − 1)2 ), and ﬁnd the components of the resulting vectors relative to the given basis C on P3 . 
We have T (1) = 1 + x, T (x − 1) = −1 + x2 , T ((x − 1)2 ) = 1 − 2x − x2 + x3 . Setting 1 + x = c1 (1) + c2 (x − 1) + c3 (x − 1)2 + c4 (x − 1)3 2 1 and solving, we ﬁnd c1 = 2, c2 = 1, c3 = 0, and c4 = 0. Therefore [T (1)]C = . Next, setting 0 0 −1 + x2 = c1 (1) + c2 (x − 1) + c3 (x − 1)2 + c4 (x − 1)3 0 2 and solving, we ﬁnd c1 = 0, c2 = 2, c3 = 1, and c4 = 0. Therefore, [T (x − 1)]C = . Finally, setting 1 0 1 − 2x − x2 + x3 = c1 (1) + c2 (x − 1) + c3 (x − 1)2 + c4 (x − 1)3 −1 −1 and solving, we ﬁnd c1 = −1, c2 = −1, c3 = 2, and c4 = 1. Therefore, [T ((x − 1)2 )]C = 2 . Putting 1 these results into the columns of [T ]C , we obtain B 2 0 −1 1 2 −1 . [T ]C = B 0 1 2 00 1 5. (a): We must determine T (1), T (x), T (x2 ), and T (x3 ), and ﬁnd the components of the resulting vectors relative to the standard basis C on P2 . We have T (1) = 0, T (x) = 1, T (x2 ) = 2x, T (x3 ) = 3x2 . Therefore, if C is the standard basis on P2 , then we have 0 1 0 [T (1)]C = 0 , [T (x)]C = 0 , [T (x2 )]C = 2 , 0 0 0 0 [T (x3 )]C = 0 . 3 377 Putting these results into the columns of [T ]C , we obtain B 010 [T ]C = 0 0 2 B 000 0 0 . 3 (b): We must determine T (x3 ), T (x3 + 1), T (x3 + x), and T (x3 + x2 ), and ﬁnd the components of the resulting vectors relative to the given basis C on P2 . We have T (x3 ) = 3x2 , T (x3 + 1) = 3x2 , T (x3 + x) = 3x2 + 1, T (x3 + x2 ) = 3x2 + 2x. Setting 3x2 = c1 (1) + c2 (1 + x) + c3 (1 + x + x2 ) 0 and solving, we ﬁnd c1 = 0, c2 = −3, and c3 = 3. Therefore [T (x3 )]C = −3 . Likewise, [T (x3 + 1)]C = 3 0 −3 . Next, setting 3 3x2 + 1 = c1 (1) + c2 (1 + x) + c3 (1 + x + x2 ) 1 and solving, we ﬁnd c1 = 1, c2 = −3, and c3 = 3. Therefore, [T (x3 + x)]C = −3 . Finally, setting 3 3x2 + 2x = c1 (1) + c2 (1 + x) + c3 (1 + x + x2 ) and solving, we ﬁnd c1 = −2, c2 = −1, and c3 = 3. Therefore, [T (x3 + 2x)]C −2 = −1 . Putting these 3 results into the columns of [T ]C , we obtain B 0 0 1 −2 [T ]C = −3 −3 −3 −1 . B 3 3 3 3 6. 
(a): We must determine T (E11 ), T (E12 ), T (E21 ), and T (E22 ), and ﬁnd the components of the resulting vectors relative to the standard basis C on R2 . We have T (E11 ) = (1, 1), T (E12 ) = (0, 0), T (E21 ) = (0, 0), T (E22 ) = (1, 1). Therefore, since C is the standard basis on R2 , we have [T (E11 )]C = 1 1 , [T (E12 )]C = 0 0 , [T (E21 )]C = Putting these results into the columns of [T ]C , we obtain B [T ]C = B 1 1 0 0 0 0 1 1 . 0 0 , [T (E22 )]C = 1 1 . 378 (b): Let us denote the four matrices in the ordered basis for B as A1 , A2 , A3 , and A4 , respectively. We must determine T (A1 ), T (A2 ), T (A3 ), and T (A4 ), and ﬁnd the components of the resulting vectors relative to the standard basis C on R2 . We have T (A1 ) = (−4, −4), T (A2 ) = (3, 3), T (A3 ) = (−2, −2), T (A4 ) = (0, 0). Therefore, since C is the standard basis on R2 , we have [T (A1 )]C = −4 −4 , [T (A2 )]C = 3 3 , −2 −2 [T (A3 )]C = , 0 0 [T (A4 )]C = . Putting these results into the columns of [T ]C , we obtain B −4 −4 [T ]C = B 3 −2 3 −2 0 0 . 7. (a): We must determine T (E11 ), T (E12 ), T (E21 ), and T (E22 ), and ﬁnd the components of the resulting vectors relative to the standard basis C on M2 (R). We have T (E12 ) = 2E12 − E21 , T (E21 ) = 2E21 − E12 , T (E22 ) = E22 . 0 2 [T (E12 )]C = −1 , 0 0 −1 [T (E21 )]C = 2 , 0 0 0 [T (E22 )]C = . 0 1 T (E11 ) = E11 , Therefore, we have 1 0 [T (E11 )]C = , 0 0 Putting these results into the columns of [T ]C , we obtain B 1 0 00 0 2 −1 0 [T ]C = B 0 −1 20 0 0 01 . (b): Let us denote the four matrices in the ordered basis for B as A1 , A2 , A3 , and A4 , respectively. We must determine T (A1 ), T (A2 ), T (A3 ), and T (A4 ), and ﬁnd the components of the resulting vectors relative to the standard basis C on M2 (R). We have T (A1 ) = −1 −2 −2 −3 , T (A2 ) = 1 3 0 2 , T (A3 ) = 0 −8 7 −2 , T (A4 ) = 0 −2 7 0 Therefore, we have −1 −2 [T (A1 )]C = −2 , −3 1 0 [T (A2 )]C = , 3 2 0 −8 [T (A3 )]C = 7 , −2 0 7 [T (A4 )]C = −2 . 0 . 
379 Putting these results into the columns of [T ]C , we obtain B −1 1 0 0 −2 0 −8 7 C [T ]B = −2 3 7 −2 −3 2 −2 0 . 8. (a): We must determine T (e2x ) and T (e−3x ), and ﬁnd the components of the resulting vectors relative to the given basis C . We have T (e2x ) = 2e2x and T (e−3x ) = −3e−3x . Therefore, we have 2 0 0 −3 [T ]C = B . (b): We must determine T (e2x − 3e−3x ) and T (2e−3x ), and ﬁnd the components of the resulting vectors relative to the given basis C . We have T (e2x − 3e−3x ) = 2e2x + 9e−3x T (2e−3x ) = −6e−3x . and Now, setting 2e2x + 9e−3x = c1 (e2x + e−3x ) + c2 (−e2x ) and solving, we ﬁnd c1 = 9 and c2 = 7. Therefore, [T (e2x − 3e−3x )]C = 9 7 . Finally, setting −6e−3x = c1 (e2x + e−3x ) + c2 (−e2x ) and solving, we ﬁnd c1 = c2 = −6. Therefore, [T (2e−3x )]C = −6 −6 . Putting these results into the columns of [T ]C , we obtain B 9 −6 7 −6 [T ]C = B . 9. (a): Let us ﬁrst compute [T ]C . We must determine T (1) and T (x), and ﬁnd the components of the resulting B vectors relative to the standard basis C = {E11 , E12 , E21 , E22 } on M2 (R). We have T (1) = Therefore 1 0 0 −1 and 1 0 [T (1)]C = 0 −1 T (x) = −1 −2 0 1 −1 0 and [T (x)]C = −2 . 1 Hence, 1 −1 0 0 [T ]C = B 0 −2 . −1 1 . 380 Now, −2 3 [p(x)]B = [−2 + 3x]B = . Therefore, we have 1 −1 0 0 [T (p(x))]C = [T ]C [p(x)]B = B 0 −2 −1 1 −5 0 = −6 . 5 −2 3 Thus, T (p(x)) = −5 −6 0 5 . (b): We have −5 −6 T (p(x)) = T (−2 + 3x) = 0 5 . 10. (a): Let us ﬁrst compute [T ]C . We must determine T (1, 0, 0), T (0, 1, 0), and T (0, 0, 1), and ﬁnd the compoB nents of the resulting vectors relative to the standard basis C = {1, x, x2 , x3 } on P3 . We have T (1, 0, 0) = 2 − x − x3 , T (0, 1, 0) = −x, T (0, 0, 1) = x + 2x3 . Therefore, 2 −1 [T (1, 0, 0)]C = 0 , −1 0 −1 [T (0, 1, 0)]C = 0 , 0 0 1 [T (0, 0, 1)]C = . 0 2 Putting these results into the columns of [T ]C , we obtain B 2 00 −1 −1 1 . 
[T ]C = B 0 0 0 −1 02 Now, 2 [v]B = [(2, −1, 5)]B = −1 , 5 and therefore, 2 00 4 2 −1 −1 1 4 C −1 = [T (v)]C = [T ]B [v]B = 0 0 0 0 5 −1 02 8 Therefore, T (v) = 4 + 4x + 8x3 . . 381 (b): We have T (v) = T (2, −1, 5) = 4 + 4x + 8x3 . 11. (a): Let us ﬁrst compute [T ]C . We must determine T (E11 ), T (E12 ), T (E21 ), and T (E22 ), and ﬁnd the B components of the resulting vectors relative to the standard basis C = {E11 , E12 , E21 , E22 }. We have T (E11 ) = 2 −1 0 −1 , T (E12 ) = −1 0 0 −1 , T (E21 ) = 0 0 0 3 , T (E22 ) = 1 0 3 0 . Therefore, 2 −1 [T (E11 )]C = 0 , −1 −1 0 [T (E12 )]C = 0 , −1 0 0 [T (E21 )]C = , 0 3 1 3 [T (E22 )]C = . 0 0 Putting these results into the columns of [T ]C , we obtain B 2 −1 0 1 −1 0 0 3 . [T ]C = B 0 0 0 0 −1 −1 3 0 Now, −7 2 [A]B = 1 , −3 and therefore, −7 2 −1 0 1 −1 0 0 3 2 C [T (A)]C = [T ]B [A]B = 0 0 0 0 1 −1 −1 3 0 −3 −19 −2 . = 0 8 Hence, T (A) = −19 −2 0 8 . T (A) = −19 −2 0 8 . (b): We have 12. (a): Let us ﬁrst compute [T ]C . We must determine T (1), T (x), and T (x2 ), and ﬁnd the components of the B resulting vectors relative to the standard basis C = {1, x, x2 , x3 , x4 }. We have T (1) = x2 , T (x) = x3 , T (x2 ) = x4 . 382 Therefore, [T (1)]C = 0 0 1 0 0 0 0 0 1 0 [T (x)]C = , [T (x )]C = , 2 0 0 0 0 1 . Putting these results into the columns of [T ]C , we obtain B [T ]C B = 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 . Now, −1 [p(x)]B = [−1 + 5x − 6x2 ]B = 5 , −6 and therefore, [T (p(x))]C = [T ]C [p(x)]B = B 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 −1 5 = −6 0 0 −1 5 −6 . Hence, T (p(x)) = −x2 + 5x3 − 6x4 . (b): We have T (p(x)) = −x2 + 5x3 − 6x4 . 13. (a): Let us ﬁrst compute [T ]C . We must determine T (Eij ) for 1 ≤ i, j ≤ 3, and ﬁnd components of the B resulting vectors relative to the standard basis C = {1}. We have T (E11 ) = T (E22 ) = T (E33 ) = 1 and T (E12 ) = T (E13 ) = T (E23 ) = T (E21 ) = T (E31 ) = T (E32 ) = 0. Therefore [T (E11 )]C = [T (E22 ]C = [T (E33 ]C = [1] and all other component vectors are [0]. 
Putting these results into the columns of [T ]C , we obtain B [T ]C = B 1 0 0 0 1 0 0 0 1 . 383 Now, 2 −6 0 1 4 −4 0 0 −3 [A]B = . and therefore, [T (A)]C = [T ]C [A]B = B 1 0 0 0 1 0 0 0 1 2 −6 0 1 4 −4 0 0 −3 = [3]. Hence, T (A) = 3. (b): We have T (A) = 3. 14. (a): Let us ﬁrst compute [T ]C . We must determine T (1), T (x), T (x2 ), T (x3 ), and T (x4 ), and ﬁnd the B components of the resulting vectors relative to the standard basis C = {1, x, x2 , x3 }. We have T (1) = 0, T (x) = 1, T (x2 ) = 2x, T (x3 ) = 3x2 , T (x4 ) = 4x3 . Therefore, 0 0 [T (1)]C = , 0 0 1 0 [T (x)]C = , 0 0 0 2 [T (x2 )]C = , 0 0 Putting these results into the columns of [T ]C , we obtain B 010 0 0 2 [T ]C = B 0 0 0 000 0 0 3 0 0 0 [T (x3 )]C = , 3 0 0 0 . 0 4 Now, [p(x)]B = [3 − 4x + 6x + 6x − 2x ]B = 2 3 4 3 −4 6 6 −2 , 0 0 [T (x4 )]C = . 0 4 384 and therefore, 0 0 [T (p(x))]C = [T ]C [p(x)]B = B 0 0 1 0 0 0 0 2 0 0 3 −4 0 −4 0 6 = 12 18 0 6 −8 4 −2 0 0 3 0 . Therefore, T (p(x)) = T (3 − 4x + 6x2 + 6x3 − 2x4 ) = −4 + 12x + 18x2 − 8x3 . (b): We have T (p(x)) = p (x) = −4 + 12x + 18x2 − 8x3 . 15. (a): Let us ﬁrst compute [T ]C . We must determine T (1), T (x), T (x2 ), and T (x3 ), and ﬁnd the components B of the resulting vectors relative to the standard basis C = {1}. We have T (1) = 1, T (x) = 2, T (x2 ) = 4, T (x3 ) = 8. Therefore, [T (1)]C = [1], [T (x)]C = [2], [T (x2 )]C = [4], [T (x3 )]C = [8]. Putting these results into the columns of [T ]C , we obtain B [T ]C = B 1 2 4 8 . Now, 0 2 [p(x)]B = [2x − 3x2 ]B = −3 , 0 and therefore, [T (p(x))]C = [T ]C [p(x)]B = B 1 2 4 8 0 2 −3 = [−8]. 0 Therefore, T (p(x)) = −8. (b): We have p(2) = 2 · 2 − 3 · 22 = −8. 16. The linear transformation T2 T1 : P4 → R is given by (T2 T1 )(p(x)) = p (2). Let A denote the standard basis on P4 , let B denote the standard basis on P3 , and let C denote the standard basis on R. 
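The matrix-times-component-vector computations in Problems 14 and 15 above can be replayed directly; both matrices are read from the solutions (differentiation on P4, evaluation at x = 2 on P3).

```python
# Problems 14 and 15 check: [T(p)]_C = [T]^C_B [p]_B.

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

D = [[0, 1, 0, 0, 0],        # matrix of d/dx : P4 -> P3 (standard bases)
     [0, 0, 2, 0, 0],
     [0, 0, 0, 3, 0],
     [0, 0, 0, 0, 4]]
p = [3, -4, 6, 6, -2]        # coefficients of 3 - 4x + 6x^2 + 6x^3 - 2x^4
assert matvec(D, p) == [-4, 12, 18, -8]   # p'(x) = -4 + 12x + 18x^2 - 8x^3

E = [[1, 2, 4, 8]]           # matrix of evaluation at x = 2 on P3
q = [0, 2, -3, 0]            # coefficients of 2x - 3x^2
assert matvec(E, q) == [-8]  # p(2) = -8
```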
(a): To determine [T2 T1 ]C , we compute A (T2 T1 )(1) = 0, (T2 T1 )(x) = 1, (T2 T1 )(x2 ) = 4, (T2 T1 )(x3 ) = 12, (T2 T1 )(x4 ) = 32. 385 Therefore [(T2 T1 )(1)]C = [0], [(T2 T1 )(x2 )]C = [4], [(T2 T1 )(x)]C = [1], [(T2 T1 )(x3 )]C = [12], [(T2 T1 )(x4 )]C = [32]. Putting these results into the columns of [T2 T1 ]C , we obtain A [T2 T1 ]C = A 0 1 4 12 32 . 0 2 0 0 0 0 3 0 0 0 = 0 4 0 1 (b): We have [T2 ]C [T1 ]B = B A 1 2 4 0 0 0 0 8 1 0 0 0 4 12 (c): Let px) = 2 + 5x − x2 + 3x4 . Then the component vector of p(x) relative ( 2 5 [p(x)]A = −1 . Thus, 0 3 2 5 [(T2 T1 )(p(x))]C = [T2 T1 ]C [p(x)]A = 0 1 4 12 32 −1 A 0 3 32 = [T2 T1 ]C . A to the standard basis A is = [97]. Therefore, (T2 T1 )(2 + 5x − x2 + 3x4 ) = 97. Of course, p (2) = 5 − 2 · 2 + 12 · 23 = 97 by direct calculation as well. 17. The linear transformation T2 T1 : P1 → R2 is given by (T2 T1 )(a + bx) = (0, 0). Let A denote the standard basis on P1 , let B denote the standard basis on M2 (R), and let C denote the standard basis on R2 . (a): To determine [T2 T1 ]C , we compute A (T2 T1 )(1) = (0, 0) and (T2 T1 )(x) = (0, 0). Therefore, we obtain [T2 T1 ]C = 02 . A (b): We have [T2 ]C [T1 ]B = B A 1 1 0 0 0 0 1 1 1 −1 0 0 C 0 −2 = 02 = [T2 T1 ]A . −1 1 386 (c): The component vector of p(x) = −3 + 8x relative to the standard basis A is [p(x)]A = [(T2 T1 )(p(x))]C = [T2 T1 ]C [p(x)]A = 02 [p(x)]A = A 0 0 −3 8 . Thus, . Therefore, (T2 T1 )(−3 + 8x) = (0, 0). Of course −11 −16 (T2 T1 )(−3 + 8x) = T1 0 11 = (0, 0) by direct calculation as well. 18. The linear transformation T2 T1 : P2 → P2 is given by (T2 T1 )(p(x)) = [(x + 1)p(x)] . Let A denote the standard basis on P2 , let B denote the standard basis on P3 , and let C denote the standard basis on P2 . (a): To determine [T2 T1 ]C , we compute as follows: A (T2 T1 )(x) = 1 + 2x, (T2 T1 )(1) = 1, (T2 T1 )(x2 ) = 2x + 3x2 . Therefore, 1 [(T2 T1 )(1)]C = 0 , 0 1 [(T2 T1 )(x)]C = 2 , 0 0 [(T2 T1 )(x2 )]C = 2 . 
3 Putting these results into the columns of [T2 T1 ]C , we obtain A 110 [T2 T1 ]C = 0 2 2 . A 003 (b): We have [T2 ]C [T1 ]B B A 0 = 0 0 1 0 0 0 2 0 1 0 1 0 0 3 0 0 1 1 0 0 1 0 = 0 1 0 1 1 2 0 0 2 = [T2 T1 ]C . A 3 (c): The component vector of p(x) = 7 − x + 2x2 relative to the standard basis A is [p(x)]A Thus, 1 [(T2 T1 )(p(x))]C = [T2 T1 ]C [p(x)]A = 0 A 0 1 2 0 0 7 6 2 −1 = 2 . 3 2 6 Therefore, (T2 T1 )(7 − x + 2x2 ) = 6 + 2x + 6x2 . 7 = −1 . 2 387 Of course (T2 T1 )(7 − x + 2x2 ) = T2 ((x + 1)(7 − x + 2x2 )) = T2 (7 + 6x + x2 + 2x3 ) = 6 + 2x + 6x2 by direct calculation as well. (d): YES. Since the matrix [T2 T1 ]C computed in part (a) is invertible, T2 T1 is invertible. A 19. NO. The matrices [T ]C obtained in Problem 2 are not invertible (they contain rows of zeros), and B therefore, the corresponding linear transformation T is not invertible. 20. YES. We can explain this answer by using a matrix representation of T . Let B = {1, x, x2 } and let C = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Then T (1) = (1, 1, 1), T (x) = (0, 1, 2), and T (x2 ) = (0, 1, 4), and so 100 [T ]C = 1 1 1 . B 124 Since this matrix is invertible, the corresponding linear transformation T is invertible. 21. Note that w belongs to Rng(T ) ⇐⇒ w = T (v) for some v in V ⇐⇒ [w]C = [T (v)]C for some v in V ⇐⇒ [w]C = [T ]C [v]B for some v in V . B The right-hand side of this last expression can be expressed as a linear combination of the columns of [T ]C , B and therefore, w belongs to Rng(T ) if and only if [w]C can be expressed as a linear combination of the columns of [T ]C . That is, if and only if [w]C belongs to colspace([T ]C ). B B Solutions to Section 5.6 True-False Review: 1. FALSE. If v = 0, then Av = λv = 0, but by deﬁnition, an eigenvector must be a nonzero vector. 2. TRUE. When we compute det(A − λI ) for an upper or lower triangular matrix A, the determinant is the product of the entries lying along the main diagonal of A − λI : det(A − λI ) = (a11 − λ)(a22 − λ) . . . (ann − λ). 
The roots of this characteristic equation are precisely the values a11, a22, . . . , ann along the main diagonal of the matrix A.

3. TRUE. The eigenvalues of a matrix are precisely the roots of its characteristic equation. Therefore, two matrices A and B that have the same characteristic equation have the same eigenvalues.

4. FALSE. Many examples of this can be found. As a simple one, consider A = [0 0; 0 0] and B = [0 1; 0 0]. We have det(A − λI) = (−λ)^2 = λ^2 = det(B − λI). Note that every nonzero vector in R^2 is an eigenvector of A corresponding to λ = 0. However, only vectors of the form (a, 0) with a ≠ 0 are eigenvectors of B. Therefore, A and B do not have precisely the same set of eigenvectors. In this case, every eigenvector of B is also an eigenvector of A, but not conversely.

5. TRUE. Geometrically, every nonzero point v = (x, y) in R^2 is oriented in a different direction from the origin after a 90° rotation than it is initially. Therefore, the vectors v and Av are not parallel.

6. TRUE. The characteristic equation of an n × n matrix A, det(A − λI) = 0, involves a polynomial of degree n in the indeterminate λ. Since such a polynomial always possesses n roots (counting repeated and complex roots) by the Fundamental Theorem of Algebra, the statement is true.

7. FALSE. This is not true, in general, when the linear combination formed involves eigenvectors corresponding to different eigenvalues. For example, let A = [1 0; 0 2], with eigenvalues λ = 1 and λ = 2. It is easy to see that corresponding eigenvectors to these eigenvalues are, respectively, v1 = (1, 0) and v2 = (0, 1). However, note that A(v1 + v2) = (1, 2), which is not of the form λ(v1 + v2), and therefore v1 + v2 is not an eigenvector of A. As a more trivial illustration, note that if v is an eigenvector of A, then 0v is a linear combination of {v} that is no longer an eigenvector of A.

8. TRUE. This is basically a fact about roots of polynomials.
Complex roots of real polynomials always occur in complex conjugate pairs. Therefore, if λ = a + ib (b = 0) is an eigenvalue of A, then so is λ = a − ib. 9. TRUE. If λ is an eigenvalue of A, then we have Av = λv for some eigenvector v of A corresponding to λ. Then A2 v = A(Av) = A(λv) = λ(Av) = λ(λv) = λ2 v, which shows that v is also an eigenvector of A2 , this time corresponding to the eigenvalue λ2 . Problems: 1 2 1. Av = 3 2 1 1 = 4 4 =4 1 1 = λv . 1 −2 −6 2 6 2 2 −5 1 = 3 = 3 1 = λv. 2. Av = −2 2 1 8 −1 −3 −1 1 4 c1 + 4c2 3. Since v = c1 0 + c2 −3 = −3c2 , it follows that −3 0 −3c1 14 1 c1 + 4c2 −2c1 − 8c2 c1 + 4c2 = −2 −3c2 = λv. 1 −3c2 = 6c2 Av = 3 2 3 4 −1 −3c1 6c1 −3c1 41 1 1 2 λ1 = λ1 =⇒ = =⇒ λ1 = 2. 23 −2 −2 −4 −2λ1 41 1 1 5 λ2 Av2 = λ2 v2 =⇒ = λ2 =⇒ = =⇒ λ2 = 5. 23 1 1 5 λ2 Thus λ1 = 2 corresponds to v1 and λ2 = 5 corresponds to v2 . 4. Av1 = λ1 v1 =⇒ 5. The only vectors that are mapped into a scalar multiple of themselves under a reﬂection in the x-axis are those vectors that either point along the x-axis, or that point along the y -axis. Hence, the eigenvectors 389 are of the form (a, 0) or (0, b) where a and b are arbitrary nonzero real numbers. A vector that points along the x-axis will have neither its magnitude nor its direction altered by a reﬂection in the x-axis. Hence, the eigenvectors of the form (a, 0) correspond to the eigenvalue λ = 1. A vector of the form (0, b) will be mapped into the vector (0, −b) = −1(0, b) under a reﬂection in the x-axis. Consequently, the eigenvectors of the form (0, b) correspond to the eigenvalue λ = −1. 6. Any vectors lying along the line y = x are unmoved by the action of T , and hence, any vector (t, t) with t = 0 is an eigenvector with corresponding eigenvalue λ = 1. On the other hand, any vector lying along the line y = −x will be reﬂected across the line y = x, thereby experiencing a 180◦ change of direction. Therefore, any vector (t, −t) with t = 0 is an eigenvector with corresponding eigenvalue λ = −1. 
All vectors that do not lie on the line y = x or the line y = −x are not eigenvectors of this linear transformation. 7. If θ = 0, π , there are no vectors that are mapped into scalar multiples of themselves under the rotation, and consequently, there are no real eigenvalues and eigenvectors in this case. If θ = 0, then every vector is mapped onto itself under the rotation, therefore λ = 1, and every nonzero vector in R2 is an eigenvector. If θ = π , then every vector is mapped onto its negative under the rotation, therefore λ = −1, and once again, every nonzero vector in R2 is an eigenvector. 8. Any vectors lying on the y -axis are unmoved by the action of T , and hence, any vector (0, y, 0) with y not zero is an eigenvector of T with corresponding eigenvalue λ = 1. On the other hand, any vector lying in the xz -plane, say (x, 0, z ) is transformed under T to (0, 0, 0). Thus, any vector (x, 0, z ) with x and z not both zero is an eigenvector with corresponding eigenvalue λ = 0. 3−λ −1 = 0 ⇐⇒ λ2 − 2λ − 8 = 0 −5 −1 − λ ⇐⇒ (λ + 2)(λ − 4) = 0 ⇐⇒ λ = −2 or λ = 4. 5 −1 v1 0 If λ = −2 then (A − λI )v = 0 assumes the form = −5 1 v2 0 =⇒ 5v1 − v2 = 0 =⇒ v2 = 5v1 . If we let v1 = t ∈ R, then the solution set of this system is {(t, 5t) : t ∈ R} so the eigenvectors corresponding to λ = −2 are v = t(1, 5) where t ∈ R. −1 −1 v1 0 If λ = 4 then (A − λI )v = 0 assumes the form = −5 −5 v2 0 =⇒ −v1 − v2 = 0 =⇒ v2 = −v1 . If we let v2 = r ∈ R, then the solution set of this system is {(r, −r) : r ∈ R} so the eigenvectors corresponding to λ = 4 are v = r(1, −1) where r ∈ R. 9. det(A − λI ) = 0 ⇐⇒ 1−λ 6 = 0 ⇐⇒ λ2 + 2λ − 15 = 0 2 −3 − λ ⇐⇒ (λ − 3)(λ + 5) = 0 ⇐⇒ λ = 3 or λ = −5. −2 6 v1 0 = If λ = 3 then (A − λI )v = 0 assumes the form 2 −6 v2 0 =⇒ v1 − 3v2 = 0. If we let v2 = r ∈ R, then the solution set of this system is {(3r, r) : r ∈ R} so the eigenvectors corresponding to λ = 3 are v = r(3, 1) where r ∈ R. 
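The 2 × 2 eigenpair computations above can be confirmed directly from the definition Av = λv, using exact integer arithmetic. The sketch below checks the matrix with characteristic polynomial λ^2 − 2λ − 8; the matrix itself is inferred from the linear systems displayed in the solution, so its entries are an assumption rather than a quotation from the text:

```python
# Check of the eigenpairs for the matrix with characteristic polynomial
# lambda^2 - 2*lambda - 8 (entries inferred from the displayed systems).
A = [[3, -1],
     [-5, -1]]

def matvec(M, v):
    """Multiply a matrix, stored as a list of rows, by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in M]

# lambda = -2 with eigenvector (1, 5): Av = (-2, -10) = -2 * (1, 5).
assert matvec(A, [1, 5]) == [-2, -10]
# lambda = 4 with eigenvector (1, -1): Av = (4, -4) = 4 * (1, -1).
assert matvec(A, [1, -1]) == [4, -4]
# Trace and determinant match lambda^2 - 2*lambda - 8.
assert A[0][0] + A[1][1] == 2
assert A[0][0] * A[1][1] - A[0][1] * A[1][0] == -8
```

The same short `matvec` helper can be reused to spot-check any of the eigenvector calculations in this section.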
66 v1 0 If λ = −5 then (A − λI )v = 0 assumes the form = 22 v2 0 =⇒ v1 + v2 = 0. If we let v2 = s ∈ R, then the solution set of this system is {(−s, s) : s ∈ R} so the eigenvectors corresponding to λ = −5 are v = s(−1, 1) where s ∈ R. 10. det(A − λI ) = 0 ⇐⇒ 7−λ 4 = 0 ⇐⇒ λ2 − 10λ + 25 = 0 −1 3−λ ⇐⇒ (λ − 5)2 = 0 ⇐⇒ λ = 5 of multiplicity two. 11. det(A − λI ) = 0 ⇐⇒ 390 2 4 v1 0 = −1 −2 v2 0 =⇒ v1 + 2v2 = 0 =⇒ v1 = −2v2 . If we let v2 = t ∈ R, then the solution set of this system is {(−2t, t) : t ∈ R} so the eigenvectors corresponding to λ = 5 are v = t(−2, 1) where t ∈ R. If λ = 5 then (A − λI )v = 0 assumes the form 12. det(A − λI ) = 0 ⇐⇒ 2−λ 0 0 2−λ = 0 ⇐⇒ (2 − λ)2 = 0 ⇐⇒ λ = 2 of multiplicity two. 00 v1 0 = 00 v2 0 Thus, if we let v1 = s and v2 = t where s, t ∈ R, then the solution set of this system is {(s, t) : s, t ∈ R} so the eigenvectors corresponding to λ = 2 are v = s(1, 0) + t(0, 1) where s, t ∈ R. If λ = 2 then (A − λI )v = 0 assumes the form 13. det(A − λI ) = 0 ⇐⇒ 3−λ 4 −2 −1 − λ = 0 ⇐⇒ λ2 − 2λ + 5 = 0 ⇐⇒ λ = 1 ± 2i. 2 + 2i −2 v1 0 = 4 −2 + 2i v2 0 =⇒ (1 + i)v1 − v2 = 0. If we let v1 = s ∈ C, then the solution set of this system is {(s, (1 + i)s) : s ∈ C} so the eigenvectors corresponding to λ = 1 − 2i are v = s(1, 1 + i) where s ∈ C. By Theorem 5.6.8, since the entries of A are real, λ = 1 + 2i has corresponding eigenvectors of the form v = t(1, 1 − i) where t ∈ C. If λ = 1 − 2i then (A − λI )v = 0 assumes the form 14. det(A − λI ) = 0 ⇐⇒ 2−λ −3 3 2−λ = 0 ⇐⇒ (2 − λ)2 = −9 ⇐⇒ λ = 2 ± 3i. 3i 3 v1 0 = −3 3i v2 0 =⇒ v2 = −iv1 . If we let v1 = t ∈ C, then the solution set of this system is {(t, −it) : t ∈ C} so the eigenvectors corresponding to λ = 2 − 3i are v = t(1, −i) where t ∈ C. By Theorem 5.6.8, since the entries of A are real, λ = 2 + 3i has corresponding eigenvectors of the form v = r(1, i) where r ∈ C. If λ = 2 − 3i then (A − λI )v = 0 assumes the form 15. 
det(A − λI ) = 0 ⇐⇒ 10 − λ 0 −8 −12 2−λ 12 8 0 −6 λ − = 0 ⇐⇒ (λ − 2)3 = 0 ⇐⇒ λ = 2 of multiplicity three. 8 −12 8 v1 0 0 0 v2 = 0 If λ = 2 then (A − λI )v = 0 assumes the form 0 −8 12 −8 v3 0 =⇒ 2v1 − 3v2 + 2v3 = 0. Thus, if we let v2 = 2s and v3 = t where s, t ∈ R, then the solution set of this system is {(3s − t, 2s, t) : s, t ∈ R} so the eigenvectors corresponding to λ = 2 are v = s(3, 2, 0) + t(−1, 0, 1) where s, t ∈ R. 3−λ 0 1 0 −1 = 0 ⇐⇒ (λ − 1)(λ − 3)2 = 0 ⇐⇒ λ = 1 or λ = 3 of 2−λ multiplicity two. 2 0 0 v1 0 1 −1 v2 = 0 If λ = 1 then (A − λI )v = 0 assumes the form 0 1 −1 1 v3 0 =⇒ v1 = 0 and v2 − v3 = 0. Thus, if we let v3 = s where s ∈ R, then the solution set of this system is {(0, s, s) : s ∈ R} so the eigenvectors corresponding to λ = 1 are = s(0, 1, where s ∈ R. v 1) 0 0 0 v1 0 If λ = 3 then (A − λI )v = 0 assumes the form 0 −1 −1 v2 = 0 1 −1 −1 v3 0 =⇒ v1 = 0 and v2 + v3 = 0. Thus, if we let v3 = t where t ∈ R, then the solution set of this system is 16. det(A − λI ) = 0 ⇐⇒ 0 2−λ −1 391 {(0, −t, t) : t ∈ R} so the eigenvectors corresponding to λ = 3 are v = t(0, −1, 1) where t ∈ R. 17. det(A − λI ) = 0 ⇐⇒ 1−λ 0 2 0 3−λ −2 18. det(A − λI ) = 0 ⇐⇒ 6−λ −5 0 3 −2 − λ 0 0 2 = 0 ⇐⇒ (λ − 1)3 = 0 ⇐⇒ λ = 1 of multiplicity three. −1 − λ 0 0 0 v1 0 2 2 v2 = 0 If λ = 1 then (A − λI )v = 0 assumes the form 0 2 −2 −2 v3 0 =⇒ v1 = 0 and v2 + v3 = 0. Thus, if we let v3 = s where s ∈ R, then the solution set of this system is {(0, −s, s) : s ∈ R} so the eigenvectors corresponding to λ = 1 are v = s(0, −1, 1) where s ∈ R. −4 2 −1 − λ or λ = 3. = 0 ⇐⇒ (λ − 1)(λ + 1)(λ − 3) = 0 ⇐⇒ λ = −1, λ = 1, v1 7 3 −4 0 2 v2 = 0 If λ = −1 then (A − λI )v = 0 assumes the form −5 −1 0 0 0 v3 0 =⇒ v1 = v3 − v2 and 4v2 − 3v3 = 0. Thus, if we let v3 = 4r where r ∈ R, then the solution set of this system is {(r, 3r, 4r) : r ∈ R} so the eigenvectors corresponding to λ = −1 v r(1, 3, 4) where r ∈ R. 
are = v1 5 3 −4 0 2 v2 = 0 If λ = 1 then (A − λI )v = 0 assumes the form −5 −3 0 0 −2 v3 0 =⇒ 5v1 + 3v2 = 0 and v3 = 0. Thus, if we let v2 = −5s where s ∈ R, then the solution set of this system is {(3s, −5s, 0) : s ∈ R} so the eigenvectors corresponding to λ = 1 v = (3, 5, 0) where s ∈ R. are s− v1 0 3 3 −4 2 v2 = 0 If λ = 3 then (A − λI )v = 0 assumes the form −5 −5 0 0 −4 v3 0 =⇒ v1 + v2 = 0 and v3 = 0. Thus, if we let v2 = t where t ∈ R, then the solution set of this system is {(−t, t, 0) : t ∈ R} so the eigenvectors corresponding to λ = 3 are v = t(−1, 1, 0) where t ∈ R. 7−λ 8 0 −8 −9 − λ 0 6 6 = 0 ⇐⇒ (λ + 1)3 = 0 ⇐⇒ λ = −1 of multiplicity −1 − λ three. 8 −8 6 v1 0 If λ = −1 then (A − λI )v = 0 assumes the form 8 −8 6 v2 = 0 0 00 v3 0 =⇒ 4v1 − 4v2 + 3v3 = 0. Thus, if we let v2 = r and v3 = 4s where r, s ∈ R, then the solution set of this system is {(r − 3s, r, 4s) : r, s ∈ R} so the eigenvectors corresponding to λ = −1 are v = r(1, 1, 0)+ s(−3, 0, 4) where r, s ∈ R. 19. det(A − λI ) = 0 ⇐⇒ 20. det(A − λI ) = 0 ⇐⇒ multiplicity two. −λ 1 0 2−λ 2 −1 −1 0 3−λ = 0 ⇐⇒ (λ − 1)(λ − 2)2 = 0 ⇐⇒ λ = 1 or λ = 2 of −1 1 −1 v1 0 1 0 v2 = 0 If λ = 1 then (A − λI )v = 0 assumes the form 0 2 −1 2 v3 0 =⇒ v2 = 0 and v1 + v3 = 0. Thus, if we let v3 = r where r ∈ R, then the solution set of this system is {(−r, 0, r) : r ∈ R} so the eigenvectors corresponding to λ = 1 are v = r(−1, 0, 1) where r ∈ R. 392 −2 1 −1 v1 0 0 v2 If λ = 2 then (A − λI )v = 0 assumes the form 0 2 −1 1 v3 =⇒ 2v1 − v2 + v3 = 0. Thus, if we let v1 = s and v3 = t where s, t ∈ R, is {(s, 2s + t, t) : s, t ∈ R} so the eigenvectors corresponding to λ = 2 s, t ∈ R. 21. det(A − λI ) = 0 ⇐⇒ 1−λ 0 0 0 −λ −1 0 1 −λ 0 = 0 0 then the solution set of this system are v = s(1, 2, 0) + t(0, 1, 1) where = 0 ⇐⇒ (1 − λ)(1 + λ2 ) = 0 ⇐⇒ λ = 1 or λ = ±i. 0 0 0 v1 0 1 v2 = 0 If λ = 1 then (A − λI )v = 0 assumes the form 0 −1 0 −1 −1 v3 0 =⇒ −v2 + v3 = 0 and −v2 − v3 = 0. 
The solution set of this system is {(r, 0, 0) : r ∈ C} so the eigenvectors corresponding to λ = 1 are v = r(1, 0, 0) where r C. ∈ v1 0 1+i 00 i 1 v2 = 0 If λ = −i then (A − λI )v = 0 assumes the form 0 v3 0 0 −1 i =⇒ v1 = 0 and −v2 + iv3 = 0. The solution set of this system is {(0, si, s) : s ∈ C} so the eigenvectors corresponding to λ = −i are v = s(0, i, 1) where s ∈ C. By Theorem 5.6.8, since the entries of A are real, λ = i has corresponding eigenvectors of the form v = t(0, −i, 1) where t ∈ C. −2 − λ 1 1 0 −1 = 0 ⇐⇒ (λ + 2)(λ2 + 4λ + 5) = 0 ⇐⇒ λ = −2 or −3 − λ λ = −2 ± i. v1 01 0 0 1 1 −1 v2 = 0 If λ = −2 then (A − λI )v = 0 assumes the form 1 3 −1 v3 0 =⇒ v2 = 0 and v1 − v3 = 0. Thus, if we let v3 = r where r ∈ C, then the solution set of this system is {(r, 0, r) : r ∈ C} so the eigenvectors corresponding to λ = −2 are v = r(1, , where r∈ C. 0 1) 0 v1 −i 1 0 −1 v2 = 0 If λ = −2 + i then (A − λI )v = 0 assumes the form 1 1 − i v3 1 3 −1 − i 0 −2 + i −1 − 2i =⇒ v1 + v3 = 0 and v2 + v3 = 0. Thus, if we let v3 = 5s where s ∈ C, then the solution 5 5 set of this system is {((2 − i)s, (1 + 2i)s, 5s) : s ∈ C} so the eigenvectors corresponding to λ = −2 + i are v = s(2 − i, 1 + 2i, 5) where s ∈ C. By Theorem 5.6.8, since the entries of A are real, λ = −2 − i has corresponding eigenvectors of the form v = t(2 + i, 1 − 2i, 5) where t ∈ C. 22. det(A − λI ) = 0 ⇐⇒ 2−λ 3 2 1 −1 − λ 3 −1 1−λ −1 3 0 = 0 ⇐⇒ λ(λ − 2)(λ − 4) = 0 ⇐⇒ λ = 0, λ = 2, or λ = 4. 3−λ 2 −1 3 v1 0 1 0 v2 = 0 If λ = 0 then (A − λI )v = 0 assumes the form 3 2 −1 3 v3 0 =⇒ v1 + 2v2 − 3v3 = 0 and −5v2 + 9v3 = 0. Thus, if we let v3 = 5r where r ∈ R, then the solution set of this system is {(−3r, 9r, 5r) : r ∈ R} so the eigenvectors corresponding to λ = 0 are v = r(−3, 9, 5) where r ∈ R. 23. det(A − λI ) = 0 ⇐⇒ 393 0 −1 3 v1 0 If λ = 2 then (A − λI )v = 0 assumes the form 3 −1 0 v2 = 0 2 −1 1 v3 0 =⇒ v1 − v3 = 0 and v2 − 3v3 = 0. 
Thus, if we let v3 = s where s ∈ R, then the solution set of this system is {(s, 3s, s) : s ∈ R} so the eigenvectors corresponding to λ = 2 are v = s(1, 3, 1) where s ∈ R. −2 −1 3 v1 0 0 v2 = 0 If λ = 4 then (A − λI )v = 0 assumes the form 3 −3 2 −1 −1 v3 0 =⇒ v1 − v3 = 0 and v2 − v3 = 0. Thus, if we let v3 = t where t ∈ R, then the solution set of this system is {(t, t, t) : t ∈ R} so the eigenvectors corresponding to λ = 4 are v = t(1, 1, 1) where t ∈ R. 24. det(A − λI ) = 0 ⇐⇒ 5−λ 0 0 25. det(A − λI ) = 0 ⇐⇒ −λ 2 2 2 −λ 2 2 2 −λ 0 0 = 0 ⇐⇒ (λ − 5)3 = 0 ⇐⇒ λ = 5 of multiplicity three. 5−λ 000 v1 0 If λ = 5 then (A − λI )v = 0 assumes the form 0 0 0 v2 = 0 000 v3 0 Thus, if we let v1 = r, v2 = s, and v3 = t where r, s, t ∈ R, then the solution set of this system is {(r, s, t) : r, s, t ∈ R} so the eigenvectors corresponding to λ = 5 are v = r(1, 0, 0) + s(0, 1, 0) + t(0, 0, 1) where r, s, t ∈ R. That is, every nonzero vector in R3 is an eigenvector of A corresponding to λ = 5. two. 0 5−λ 0 = 0 ⇐⇒ (λ − 4)(λ + 2)2 = 0 ⇐⇒ λ = 4 or λ = −2 of multiplicity v1 0 −4 2 2 2 v2 = 0 If λ = 4 then (A − λI )v = 0 assumes the form 2 −4 v3 0 2 2 −4 =⇒ v1 − v3 = 0 and v2 − v3 = 0. Thus, if we let v3 = r where r ∈ R, then the solution set of this system is {(r, r, r) : r ∈ R} so the eigenvectors corresponding to λ = 4 are v = (1, 1, 1) where r ∈ R. r v1 0 222 If λ = −2 then (A − λI )v = 0 assumes the form 2 2 2 v2 = 0 0 222 v3 =⇒ v1 + v2 + v3 = 0. Thus, if we let v2 = s and v3 = t where s, t ∈ R, then the solution set of this system is {(−s − t, s, t) : s, t ∈ R} so the eigenvectors corresponding to λ = −2 are v = s(−1, 1, 0) + t(−1, 0, 1) where s, t ∈ R. 1−λ 2 3 4 4 3−λ 2 1 26. det(A − λI ) = 0 ⇐⇒ = 0 ⇐⇒ λ4 − 14λ3 − 32λ2 = 0 4 5 6−λ 7 7 6 5 4−λ ⇐⇒ λ2 (λ − 16)(λ + 2) = 0 ⇐⇒ λ = 16, λ = −2, λ = 0 of multiplicity two. 
or −15 2 3 4 v1 0 4 −13 2 1 v2 0 = If λ = 16 then (A − λI )v = 0 assumes the form 4 5 −10 7 v3 0 7 6 5 −12 v4 0 =⇒ v1 − 1841v3 + 2078v4 = 0, v2 + 82v3 − 93v4 = 0, and 31v3 − 35v4 = 0. Thus, if we let v4 = 31r where r ∈ R, then the solution set of this system is {(17r, 13r, 35r, 31r) : r ∈ R} so the eigenvectors corresponding to λ = 16 are v = r(17, 13, 35, 31) where r R. 394 v1 0 3234 4 5 2 1 v2 0 = If λ = −2 then (A − λI )v = 0 assumes the form 4 5 8 7 v3 0 v4 0 7656 =⇒ v1 + v4 = 0, v2 − v4 = 0, and v3 + v4 = 0. Thus, if we let v4 = s where s ∈ R, then the solution set of this system is {(−s, s, −s, s) : s ∈ R} so the eigenvectors corresponding to λ = −2 are v = s(−1, 1, −1, 1) where s ∈ R. v1 0 1234 4 3 2 1 v2 0 If λ = 0 then (A − λI )v = 0 assumes the form 4 5 6 7 v3 = 0 v4 0 7654 =⇒ v1 − v3 − 2v4 = 0 and v2 + 2v3 + 3v4 = 0. Thus, if we let v3 = a and v4 = b where a, b ∈ R, then the solution set of this system is {(a + 2b, −2a − 3b, a, b) : a, b ∈ R} so the eigenvectors corresponding to λ = 0 are v = a(1, −2, 1, 0) + b(2, −3, 0, 1) where a, b ∈ R. 27. det(A − λI ) = 0 ⇐⇒ −λ 1 0 0 −1 −λ 0 0 0 0 −λ −1 0 0 1 −λ = 0 ⇐⇒ (λ2 + 1)2 = 0 ⇐⇒ λ = ±i, where each root is of multiplicity two. v1 i10 0 0 −1 i 0 0 v2 0 If λ = −i then (A − λI )v = 0 assumes the form 0 0 i −1 v3 = 0 001 i v4 0 =⇒ v1 − iv2 = 0 and v3 + iv4 = 0. Thus, if we let v2 = r and v4 = s where r, s ∈ C, then the solution set of this system is {(ir, r, −is, s) : r, s ∈ C} so the eigenvectors corresponding to λ = −i are v = r(i, 1, 0, 0) + s(0, 0, −i, 1) where r, s ∈ C. By Theorem 5.6.8, since the entries of A are real, λ = i has corresponding eigenvectors v = a(−i, 1, 0, 0) + b(0, 0, i, 1) where a, b ∈ C. 28. This matrix is lower triangular, and therefore, the eigenvalues appear along the main diagonal of the matrix: λ = 1 + i, 1 − 3i, 1. Note that the eigenvalues do not occur in complex conjugate pairs, but this does not contradict Theorem 5.6.8 because the matrix does not consist entirely of real elements. 
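The 4 × 4 computation in Problem 26 involves enough arithmetic that a direct check of Av = λv is worthwhile. In the sketch below the matrix is taken from the λ = 0 case of the solution, where A − λI = A:

```python
# Check of Problem 26: the matrix as displayed in the lambda = 0 case.
A = [[1, 2, 3, 4],
     [4, 3, 2, 1],
     [4, 5, 6, 7],
     [7, 6, 5, 4]]

def matvec(M, v):
    """Multiply a matrix, stored as a list of rows, by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in M]

# lambda = 16 with eigenvector (17, 13, 35, 31).
assert matvec(A, [17, 13, 35, 31]) == [16 * x for x in [17, 13, 35, 31]]
# lambda = -2 with eigenvector (-1, 1, -1, 1).
assert matvec(A, [-1, 1, -1, 1]) == [-2 * x for x in [-1, 1, -1, 1]]
# lambda = 0 of multiplicity two, with eigenvectors (1, -2, 1, 0), (2, -3, 0, 1).
assert matvec(A, [1, -2, 1, 0]) == [0, 0, 0, 0]
assert matvec(A, [2, -3, 0, 1]) == [0, 0, 0, 0]
```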
1−λ 2 29. (a) p(λ) = det(A − λI2 ) = −1 4−λ = λ2 − 5λ + 6. (b) A2 − 5A + 6I2 = = 1 −1 2 4 1 −1 2 4 −1 −5 10 14 + 1 −1 2 4 −5 −5 5 −10 −20 + +6 6 0 0 6 1 0 0 1 = 0 0 0 0 1 6 1 6 . (c) Using part (b) of this problem: A2 − 5A + 6I2 = 02 ⇐⇒ A−1 (A2 − 5A + 6I2 ) = A−1 · 02 ⇐⇒ A − 5I2 + 6A−1 = 02 ⇐⇒ 6A−1 = 5I2 − A 1 6 1 = 6 ⇐⇒ A−1 = 5 ⇐⇒ A−1 4 −2 1 0 0 1 1 1 − 1 −1 2 4 , or A−1 = 2 3 1 −3 = 02 . 395 30. (a) det(A − λI2 ) = 1−λ 2 2 −2 − λ = 0 ⇐⇒ λ2 + λ − 6 = 0 ⇐⇒ (λ − 2)(λ + 3) = 0 ⇐⇒ λ = 2 or λ = −3. 1 2 2 −2 1 2 10 ∼ = B. 0 −6 01 1−λ 0 = 0 ⇐⇒ (λ − 1)2 = 0 ⇐⇒ λ = 1 of multiplicity two. Matrices A det(B − λI2 ) = 0 ⇐⇒ 0 1−λ and B do not have the same eigenvalues. (b) ∼ 31. A(3v1 − v2 ) = 3Av1 − Av2 = 3(2v1 ) − (−3v2 ) = 6v1 + 3v2 =6 1 −1 +3 2 1 = 6 −6 + 6 3 = 12 −3 . 32. (a) Let a, b, c ∈ R. If v = av1 + bv2 + cv3 , then 5, 0, 3) = a(1, −1, 1) + b(2, 1, 3) + c(−1, −1, 2), or a + 2b − c = 5 −a + b − c = 0 (5, 0, 3) = (a + 2b − c, −a + b − c, a + 3b + 2c). The last equality results in the system: a + 3b + 2c = 3. This system has the solution a = 2, b = 1, and c = −1. Consequently, v = 2v1 + v2 − v3 . (b) Using part (a): Av = A(2v1 + v2 − v3 ) = 2Av1 + Av2 − Av3 = 2(2v1 ) + (−2v2 ) − (3v3 ) = 4v1 − 2v2 − 3v3 = 4(1, −1, 1) − 2(2, 1, 3) − 3(−1, −1, 2) = (3, −3, −8). 33. A(c1 v1 + c2 v2 + c3 v3 ) = A(c1 v1 ) + A(c2 v2 ) + A(c3 v3 ) = c1 (Av1 ) + c2 (Av2 ) + c3 (Av3 ) = c1 (λv1 ) + c2 (λv2 ) + c3 (λv3 ) = λ(c1 v1 + c2 v2 + c3 v3 ). Thus, c1 v1 + c2 v2 + c3 v3 is an eigenvector of A corresponding to the eigenvalue λ. 34. Recall that the determinant of an upper (lower) triangular matrix is just the product of its main diagonal elements. Let A be an n × n upper (lower) triangular matrix. It follows that A − λIn is an upper (lower) triangular matrix with main diagonal element aii − λ, i = 1, 2, . . . , n. Consequently, n det(A − λIn ) = 0 ⇐⇒ (aii − λ) = 0. i=1 This implies that λ = a11 , a22 , . . . , ann . 35. Any scalar λ such that det(A − λI ) = 0 is an eigenvalue of A. 
Therefore, if 0 is an eigenvalue of A, then det(A − 0 · I ) = 0, or det(A) = 0, which implies that A is not invertible. On the other hand, if 0 is not an eigenvalue of A, then det(A − 0 · I ) = 0, or det(A) = 0, which implies that A is invertible. 36. A is invertible, so A−1 exists. Also, λ is an eigenvalue of A so that Av = λv. Thus, 1 A−1 (Av) = A−1 (λv) =⇒ (A−1 A)v = λA−1 v =⇒ In v = λA−1 v =⇒ v = λA−1 v =⇒ v = A−1 v. λ 1 Therefore is an eigenvalue of A−1 provided that λ is an eigenvalue of A. λ 37. By assumption, we have Av = λv and B v = µv. 396 (a) Therefore, (AB )v = A(B v) = A(µv) = µ(Av) = µ(λv) = (λµ)v, which shows that v is an eigenvector of AB with corresponding eigenvalue λµ. (b) Also, (A + B )v = Av + B v = λv + µv = (λ + µ)v, which shows that v is an eigenvector of A + B with corresponding eigenvalue λ + µ. 38. Recall that a matrix and its transpose have the same determinant. Thus, det(A − λIn ) = det([A − λIn ]T ) = det(AT − λIn ). Since A and AT have the same characteristic polynomial, it follows that both matrices also have the same eigenvalues. 39. (a) v = r + is is an eigenvector with eigenvalue λ = a + bi, b = 0 =⇒ Av = λv =⇒ A(r + is) = (a + bi)(r + is) = (ar − bs) + i(as + br) =⇒ Ar = ar − bs and As = as + br. Now if r = 0, then A0 = a0 − bs =⇒ 0 = 0 − bs =⇒ 0 = bs =⇒ s = 0 since b = 0. This would mean that v = 0 so v could not be an eigenvector. Thus, it must be that r = 0. Similarly, if s = 0, then r = 0, and again, this would contradict the fact that v is an eigenvector. Hence, it must be the case that r = 0 and s = 0. (b) As in part (a), Ar = ar − bs and As = as + br. Let c1 , c2 ∈ R. Then if c1 r + c2 s = 0, (39.1) we have A(c1 r + c2 s) = 0 =⇒ c1 Ar + c2 As = 0 =⇒ c1 (ar − bs) + c2 (as + br) = 0. Hence, (c1 a + c2 b)r + (c2 a − c1 b)s = 0 =⇒ a(c1 r + c2 s) + b(c2 r − c1 s) = 0 =⇒ b(c2 r − c1 s) = 0 where we have used (39.1). Since b = 0, we must have c2 r − c1 s = 0. Combining this with (39.1) yields c1 = c2 = 0. 
Therefore, it follows that r and s are linearly independent vectors. 40. λ1 = 2, v = r(−1, 1). λ2 = 5, v = s(1, 2). 41. λ1 = −2 (multiplicity two), v = r(1, 1, 1). λ2 = −5, v = s(20, 11, 14). 42. λ1 = 3 (multiplicity two), v = r(1, 0, −1) + s(0, 1, −1). λ2 = 6, v = t(1, 1, 1). √ √ √ √ √ √ √ √ 43. λ1 = 3 − 6, v = r( 6, −1 + 6, −5 + 6). λ2 = 3 + 6, v = s( 6, 1 + 6, 5 + 6), λ3 = −2, v = t(−1, 3, 0). 44. λ1 = 0, v = r(2, 2, 1). λ2 = 3i, v = s(−4 − 3i, 5, −2 + 6i), λ3 = −3i, v = t(−4 + 3i, 5, −2 − 6i). 45. λ1 = −1 (multiplicity four), v = a(−1, 0, 0, 1, 0) + b(−1, 0, 1, 0, 0) + c(−1, 0, 0, 0, 1) + d(−1, 1, 0, 0, 0). Solutions to Section 5.7 True-False Review: 1. TRUE. This is the deﬁnition of a nondefective matrix. 2. TRUE. The eigenspace Eλ is equal to the null space of the n × n matrix A − λI , and this null space is a subspace of Rn . 3. TRUE. The dimension of an eigenspace never exceeds the algebraic multiplicity of the corresponding eigenvalue. 397 4. TRUE. Eigenvectors corresponding to distinct eigenspaces are linearly independent. Therefore if we choose one (nonzero) vector from each distinct eigenspace, the chosen vectors will form a linearly independent set. 5. TRUE. Since each eigenvalue of the matrix A occurs with algebraic multiplicity 1, we can simply choose one eigenvector from each eigenspace to obtain a basis of eigenvectors for A. Thus, A is nondefective. 6. FALSE. Many examples will show that this statement is false, including the n × n identity matrix In for n ≥ 2. The matrix In is not defective, and yet, has λ = 1 occurring with algebraic multiplicity n. 7. TRUE. Eigenvectors corresponding to distinct eigenvalues are always linearly independent, as proved in the text in this section. Problems: 1. det(A − λI ) = 0 ⇐⇒ 1−λ 2 4 3−λ = 0 ⇐⇒ λ2 − 4λ − 5 = 0 ⇐⇒ (λ − 5)(λ + 1) = 0 ⇐⇒ λ = 5 or λ = −1. −4 4 v1 0 = =⇒ v1 − v2 = 0. 
The solution 2 −2 v2 0 set of this system is {(r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = 5 is E1 = {v ∈ R2 : v = r(1, 1), r ∈ R}. A basis for E1 is {(1, 1)}, and dim[E1 ] = 1. 24 v1 0 = =⇒ v1 + 2v2 = 0. The solution If λ2 = −1 then (A − λI )v = 0 assumes the form 24 v2 0 set of this system is {(−2s, s) : s ∈ R}, so the eigenspace corresponding to λ2 = −1 is E2 = {v ∈ R2 : v = s(−2, 1), s ∈ R}. A basis for E2 is {(−2, 1)}, and dim[E2 ] = 1. A complete set of eigenvectors for A is given by {(1, 1), (−2, 1)}, so A is nondefective. If λ1 = 5 then (A − λI )v = 0 assumes the form 2. det(A − λI ) = 0 ⇐⇒ 3−λ 0 0 3−λ = 0 ⇐⇒ (3 − λ)2 = 0 ⇐⇒ λ = 3 of multiplicity two. 00 v1 0 = . 00 v2 0 The solution set of this system is {(r, s) : r, s ∈ R}, so the eigenspace corresponding to λ1 = 3 is E1 = {v ∈ R2 : v = r(1, 0) + s(0, 1), r, s ∈ R}. A basis for E1 is {(1, 0), (0, 1)}, and dim[E1 ] = 2. A is nondefective. 1−λ 2 3. det(A − λI ) = 0 ⇐⇒ = 0 ⇐⇒ (λ − 3)2 = 0 ⇐⇒ λ = 3 of multiplicity two. −2 5−λ −2 2 v1 0 = =⇒ v1 − v2 = 0. The solution If λ1 = 3 then (A − λI )v = 0 assumes the form −2 2 v2 0 set of this system is {(r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = 3 is E1 = {v ∈ R2 : v = r(1, 1), r ∈ R}. A basis for E1 is {(1, 1)}, and dim[E1 ] = 1. A is defective since it does not have a complete set of eigenvectors. If λ1 = 3 then (A − λI )v = 0 assumes the form 4. det(A − λI ) = 0 ⇐⇒ 5−λ −2 5 −1 − λ = 0 ⇐⇒ λ2 − 4λ + 5 = 0 ⇐⇒ λ = 2 ± i. 3+i 5 v1 0 = =⇒ −2v1 +(−3+i)v2 = −2 −3 + i v2 0 0. The solution set of this system is {((−3 + i)r, 2r) : r ∈ C}, so the eigenspace corresponding to λ1 = 2 − i is E1 = {v ∈ C2 : v = r(−3 + i, 2), r ∈ C}. A basis for E1 is {(−3 + i, 2)}, and dim[E1 ] = 1. If λ2 = 2 + i then from Theorem 5.6.8, the eigenvectors corresponding to λ2 = 2 + i are v = s(−3 − i, 2) where s ∈ C, so the eigenspace corresponding to λ2 = 2 + i is E2 = {v ∈ C2 : v = s(−3 − i, 2), s ∈ C}. A basis for E2 is {(−3 − i, 2)}, and dim[E2 ] = 1. 
A is nondefective since it has a complete set of eigenvectors, If λ1 = 2 − i then (A−λI )v = 0 assumes the form 398 namely {(−3 + i, 2), (−3 − i, 2)}. 5. det(A − λI ) = 0 ⇐⇒ 3−λ 0 0 −4 −1 − λ −4 −1 −1 2−λ = 0 ⇐⇒ (λ + 2)(λ − 3)2 = 0 ⇐⇒ λ = −2 or λ = 3 of multiplicity two. 5 −4 −1 v1 0 1 −1 v2 = 0 . If λ1 = −2 then (A − λI )v = 0 assumes the form 0 0 −4 4 v3 0 The solution set of this system is {(r, r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = −2 is E1 = {v ∈ R3 : v = r(1, 1, 1), r ∈ R}. A basis for E1 is (1, 1, 1)}, and E1= 1. { dim[ ] 0 −4 −1 v1 0 If λ2 = 3 then (A − λI )v = 0 assumes the form 0 −4 −1 v2 = 0 =⇒ 4v2 + v3 = 0. The 0 −4 −1 v3 0 solution set of this system is {(s, t, −4t) : s, t ∈ R}, so the eigenspace corresponding to λ2 = 3 is E2 = {v ∈ R3 : v = s(1, 0, 0) + t(0, 1, −4), s, t ∈ R}. A basis for E2 is {(1, 0, 0), (0, 1, −4)}, and dim[E2 ] = 2. A complete set of eigenvectors for A is given by {(1, 1, 1), (1, 0, 0), (0, 1, −4)}, so A is nondefective. 6. det(A − λI ) = 0 ⇐⇒ 4−λ 0 0 0 2−λ −2 0 −3 1−λ = 0 ⇐⇒ (λ + 1)(λ − 4)2 = 0 ⇐⇒ λ = −1 or λ = 4 of multiplicity two. 0 v1 5 0 0 3 −3 v2 = 0 If λ1 = −1 then (A − λI )v = 0 assumes the form 0 v3 0 −2 2 0 =⇒ v1 = 0 and v2 − v3 = 0. The solution set of this system is {(0, r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = −1 is E1 = {v ∈ R3 : v = r(0, 1, 1), r ∈ R}. A basis for E1 is {(0, 1, 1)}, and dim[E1 ] = 1. v1 0 0 0 0 0 −2 −3 v2 = 0 =⇒ 2v2 + 3v3 = 0. If λ2 = 4 then (A − λI )v = 0 assumes the form 0 −2 −3 v3 0 The solution set of this system is {(s, 3t, −2t) : s, t ∈ R}, so the eigenspace corresponding to λ2 = 4 is E2 = {v ∈ R3 : v = s(1, 0, 0) + t(0, 3, −2), s, t ∈ R}. A basis for E2 is {(1, 0, 0), (0, −1, 1)}, and dim[E2 ] = 2. A complete set of eigenvectors for A is given by {(1, 0, 0), (0, 3, −2), (0, 1, 1)}, so A is nondefective. 7. det(A − λI ) = 0 ⇐⇒ 3−λ −1 0 1 5−λ 0 0 0 4−λ = 0 ⇐⇒ (λ − 4)3 = 0 ⇐⇒ λ = 4 of multiplicity three. 
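Problems 5 and 6 can be confirmed the same way, by verifying Av = λv for each claimed eigenvector. The sketch below does this for Problem 6; the matrix is inferred from the displayed systems A + I and A − 4I, so its entries are an assumption:

```python
# Check of Problem 6 (entries inferred from the displayed systems).
A = [[4, 0, 0],
     [0, 2, -3],
     [0, -2, 1]]

def matvec(M, v):
    """Multiply a matrix, stored as a list of rows, by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in M]

# lambda = -1 with eigenvector (0, 1, 1): Av = (0, -1, -1).
assert matvec(A, [0, 1, 1]) == [0, -1, -1]
# lambda = 4 of multiplicity two, with eigenvectors (1, 0, 0) and (0, 3, -2).
assert matvec(A, [1, 0, 0]) == [4, 0, 0]
assert matvec(A, [0, 3, -2]) == [0, 12, -8]
# Three independent eigenvectors for a 3 x 3 matrix, so A is nondefective.
```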
−1 1 0 v1 0 If λ1 = 4 then (A − λI )v = 0 assumes the form −1 1 0 v2 = 0 000 v3 0 =⇒ v1 − v2 = 0 and v3 ∈ R. The solution set of this system is {(r, r, s) : r, s ∈ R}, so the eigenspace corresponding to λ1 = 4 is E1 = {v ∈ R3 : v = r(1, 1, 0) + s(0, 0, 1), r, s ∈ R}. A basis for E1 is {(1, 1, 0), (0, 0, 1)}, and dim[E1 ] = 2. A is defective since it does not have a complete set of eigenvectors. 8. det(A − λI ) = 0 ⇐⇒ 3−λ 2 1 0 0 −λ −4 4 −λ = 0 ⇐⇒ (λ − 3)(λ2 + 16) = 0 ⇐⇒ λ = 3 or λ = ±4i. 399 0 0 0 v1 0 If λ1 = 3 then (A − λI )v = 0 assumes the form 2 −3 −4 v2 = 0 1 4 −3 v3 0 =⇒ 11v1 − 25v3 = 0 and 11v2 − 2v3 . The solution set of this system is {(25r, 2r, 11r) : r ∈ C}, so the eigenspace corresponding to λ1 = 3 is E1 = {v ∈ C3 : v = r(25, 2, 11), r ∈ C}. A basis for E1 is {(25, 2, 11)}, and dim[E1 ] = 1. 3 + 4i 0 0 v1 0 2 4i −4 v2 = 0 If λ2 = −4i then (A − λI )v = 0 assumes the form 1 4 4i v3 0 =⇒ v1 = 0 and iv2 − v3 = 0. The solution set of this system is {(0, s, is) : s ∈ C}, so the eigenspace corresponding to λ2 = −4i is E2 = {v ∈ C3 : v = s(0, 1, i), s ∈ C}. A basis for E2 is {(0, 1, i)}, and dim[E2 ] = 1. If λ3 = 4i then from Theorem 5.6.8, the eigenvectors corresponding to λ3 = 4i are v = t(0, 1, −i) where t ∈ C, so the eigenspace corresponding to λ3 = 4i is E3 = {v ∈ C3 : v = t(0, 1, −i), t ∈ C}. A basis for E3 is {(0, 1, −i)}, and dim[E3 ] = 1. A complete set of eigenvectors for A is given by {(25, 2, 11), (0, 1, i), (0, 1, −i)}, so A is nondefective. 9. det(A − λI ) = 0 ⇐⇒ 4−λ −4 0 1 6 −λ −7 0 −3 − λ = 0 ⇐⇒ (λ + 3)(λ − 2)2 = 0 ⇐⇒ λ = −3 or λ = 2 of multiplicity two. v1 71 6 0 If λ1 = −3 then (A − λI )v = 0 assumes the form −4 3 −7 v2 = 0 00 0 0 v3 =⇒ v1 + v3 = 0 and v2 − v3 = 0. The solution set of this system is {(−r, r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = −3 is E1 = {v ∈ R3 : v = r(−1, 1, 1), r ∈ R}. A basis for E1 is {(−1, 1, 1)}, and dim[E1 ] = 1. 
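The complex eigenvectors in Problem 8 can be verified with Python's built-in complex arithmetic. The matrix below is inferred from the displayed systems A − 3I and A + 4iI, so its entries are an assumption:

```python
# Check of Problem 8 (entries inferred from the displayed systems).
A = [[3, 0, 0],
     [2, 0, -4],
     [1, 4, 0]]

def matvec(M, v):
    """Multiply a matrix, stored as a list of rows, by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in M]

# lambda = 3 with eigenvector (25, 2, 11): Av = (75, 6, 33) = 3 * (25, 2, 11).
assert matvec(A, [25, 2, 11]) == [75, 6, 33]
# lambda = -4i with eigenvector (0, 1, i): Av = (0, -4i, 4) = -4i * (0, 1, i).
assert matvec(A, [0, 1, 1j]) == [0, -4j, 4]
# lambda = 4i with the conjugate eigenvector (0, 1, -i).
assert matvec(A, [0, 1, -1j]) == [0, 4j, 4]
```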
v1 0 2 1 6 If λ2 = 2 then (A − λI )v = 0 assumes the form −4 −2 −7 v2 = 0 v3 0 0 0 5 =⇒ 2v1 + v2 = 0 and v3 = 0. The solution set of this system is {(−s, 2s, 0) : s ∈ R}, so the eigenspace corresponding to λ2 = 2 is E2 = {v ∈ R3 : v = s(−1, 2, 0), s ∈ R}. A basis for E2 is {(−1, 2, 0)}, and dim[E2 ] = 1. A is defective because it does not have a complete set of eigenvectors. 2−λ 0 0 0 0 = 0 ⇐⇒ (λ − 2)3 = 0 ⇐⇒ λ = 2 of multiplicity three. 2−λ 000 v1 0 If λ1 = 2 then (A − λI )v = 0 assumes the form 0 0 0 v2 = 0 . The solution set of this 000 v3 0 system is {(r, s, t) : r, s, t ∈ R}, so the eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R3 : v = r(1, 0, 0) + s(0, 1, 0) + t(0, 0, 1), r, s, t ∈ R}. A basis for E1 is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, and dim[E1 ] = 3. A is nondefective since it has a complete set of eigenvectors. 10. det(A − λI ) = 0 ⇐⇒ 11. det(A − λI ) = 0 ⇐⇒ three. 7−λ 8 0 0 2−λ 0 −8 −9 − λ 0 6 6 −1 − λ = 0 ⇐⇒ (λ + 1)3 = 0 ⇐⇒ λ = −1 of multiplicity 400 8 −8 6 v1 0 If λ1 = −1 then (A − λI )v = 0 assumes the form 8 −8 6 v2 = 0 =⇒ 4v1 − 4v2 + 3v3 = 0. 0 00 v3 0 The solution set of this system is {(r − 3s, r, 4s) : r, s ∈ R}, so the eigenspace corresponding to λ1 = −1 is E1 = {v ∈ R3 : v = r(1, 1, 0) + s(−3, 0, 4), r, s ∈ R}. A basis for E1 is {(1, 1, 0), (−3, 0, 4)}, and dim[E1 ] = 2. A is defective since it does not have a complete set of eigenvectors. −1 −1 = 0 ⇐⇒ (λ − 2)λ2 = 0 ⇐⇒ λ = 2 or λ = 0 of −1 − λ multiplicity two. 0 2 −1 v1 0 If λ1 = 2 then (A − λI )v = 0 assumes the form 2 −1 −1 v2 = 0 2 3 −3 v3 0 =⇒ 4v1 − 3v3 = 0 and 2v2 − v3 = 0. The solution set of this system is {(3r, 2r, 4r) : r ∈ R}, so the eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R3 : v = r(3, 2, 4), r ∈ R}. A basis for E1 is {(3, 2, 4)}, and dim[E1 ] = 1. v1 0 2 2 −1 If λ2 = 0 then (A − λI )v = 0 assumes the form 2 1 −1 v2 = 0 0 2 3 −1 v3 =⇒ 2v1 − v3 = 0 and v2 = 0. 
The solution set of this system is {(s, 0, 2s) : s ∈ R}, so the eigenspace corresponding to λ2 = 0 is E2 = {v ∈ R3 : v = s(1, 0, 2), s ∈ R}. A basis for E2 is {(1, 0, 2)}, and dim[E2 ] = 1. A is defective because it does not have a complete set of eigenvectors. 12. det(A − λI ) = 0 ⇐⇒ 2−λ 2 2 2 1−λ 3 1−λ 1 1 −1 −1 − λ −1 2 2 = 0 ⇐⇒ (λ − 2)λ2 = 0 ⇐⇒ λ = 2 or λ = 0 of 2−λ multiplicity two. 0 v1 −1 −1 2 If λ1 = 2 then (A − λI )v = 0 assumes the form 1 −3 2 v2 = 0 v3 1 −1 0 0 =⇒ v1 − v3 = 0 and v2 − v3 = 0. The solution set of this system is {(r, r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R3 : v = r(1, 1, 1), r ∈ R}. A basis for E1 is {(1, 1, 1)}, and dim[E1 ] = 1. 1 −1 2 v1 0 If λ2 = 0 then (A − λI )v = 0 assumes the form 1 −1 2 v2 = 0 =⇒ v1 − v2 + 2v3 = 0. 1 −1 2 v3 0 The solution set of this system is {(s − 2t, s, t) : s, t ∈ R}, so the eigenspace corresponding to λ2 = 0 is E2 = {v ∈ R3 : v = s(1, 1, 0) + t(−2, 0, 1), s, t ∈ R}. A basis for E2 is {(1, 1, 0), (−2, 0, 1)}, and dim[E2 ] = 2. A is nondefective because it has a complete set of eigenvectors. 13. det(A − λI ) = 0 ⇐⇒ 2−λ −1 −2 3 0 −λ 1 = 0 ⇐⇒ (λ − 2)3 = 0 ⇐⇒ λ = 2 of multiplicity three. −1 4 − λ 0 30 v1 0 −1 −2 1 v2 = 0 If λ1 = 2 then (A − λI )v = 0 assumes the form −2 −1 2 v3 0 =⇒ v1 − v3 = 0 and v2 = 0. The solution set of this system is {(r, 0, r) : r ∈ R}, so the eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R3 : v = r(1, 0, 1), r ∈ R}. A basis for E1 is {(1, 0, 1)}, and 14. det(A − λI ) = 0 ⇐⇒ 401 dim[E1 ] = 1. A is defective since it does not have a complete set of eigenvectors. 15. det(A − λI ) = 0 ⇐⇒ −λ −1 −1 −1 −λ −1 −1 −1 −λ = 0 ⇐⇒ (λ + 2)(λ − 1)2 = 0 ⇐⇒ λ = −2 or λ = 1 of multiplicity two. 2 −1 −1 v1 0 2 −1 v2 = 0 If λ1 = −2 then (A − λI )v = 0 assumes the form −1 −1 −1 2 v3 0 =⇒ v1 − v3 = 0 and v2 − v3 = 0. The solution set of this system is {(r, r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = −2 is E1 = {v ∈ R3 : v = r(1, 1, 1), r ∈ R}. 
A basis for E1 is {(1, 1, 1)}, and dim[E1] = 1.
If λ2 = 1 then (A − λI)v = 0 assumes the form [−1 −1 −1; −1 −1 −1; −1 −1 −1][v1; v2; v3] = [0; 0; 0] =⇒ v1 + v2 + v3 = 0. The solution set of this system is {(−s − t, s, t) : s, t ∈ R}, so the eigenspace corresponding to λ2 = 1 is E2 = {v ∈ R3 : v = s(−1, 1, 0) + t(−1, 0, 1), s, t ∈ R}. A basis for E2 is {(−1, 1, 0), (−1, 0, 1)}, and dim[E2] = 2. A is nondefective because it has a complete set of eigenvectors.

16. (λ − 4)(λ + 1) = 0 ⇐⇒ λ = 4 or λ = −1. Since A has two distinct eigenvalues, it has two linearly independent eigenvectors and is, therefore, nondefective.

17. (λ − 1)² = 0 ⇐⇒ λ = 1 of multiplicity two.
If λ1 = 1 then (A − λI)v = 0 assumes the form [5 5; −5 −5][v1; v2] = [0; 0] =⇒ v1 + v2 = 0. The solution set of this system is {(−r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = 1 is E1 = {v ∈ R2 : v = r(−1, 1), r ∈ R}. A basis for E1 is {(−1, 1)}, and dim[E1] = 1. A is defective since it does not have a complete set of eigenvectors.

18. λ² − 4λ + 13 = 0 ⇐⇒ λ = 2 ± 3i. Since A has two distinct eigenvalues, it has two linearly independent eigenvectors and is, therefore, nondefective.

19. (λ − 2)²(λ + 1) = 0 ⇐⇒ λ = −1 or λ = 2 of multiplicity two. To determine whether A is nondefective, all we require is the dimension of the eigenspace corresponding to λ = 2.
If λ = 2 then (A − λI)v = 0 assumes the form [−1 −3 1; −1 −3 1; −1 −3 1][v1; v2; v3] = [0; 0; 0] =⇒ −v1 − 3v2 + v3 = 0. The solution set of this system is {(−3s + t, s, t) : s, t ∈ R}, so the eigenspace corresponding to λ = 2 is E = {v ∈ R3 : v = s(−3, 1, 0) + t(1, 0, 1), s, t ∈ R}. Since dim[E] = 2, A is nondefective.

20. (λ − 3)³ = 0 ⇐⇒ λ = 3 of multiplicity three.
If λ = 3 then (A − λI)v = 0 assumes the form [−4 2 2; −4 2 2; −4 2 2][v1; v2; v3] = [0; 0; 0] =⇒ −2v1 + v2 + v3 = 0. The solution set of this system is {(r, 2r − s, s) : r, s ∈ R}, so the eigenspace corresponding to λ = 3 is E = {v ∈ R3 : v = r(1, 2, 0) + s(0, −1, 1), r, s ∈ R}.
A is defective since it does not have a complete set of eigenvectors.

21. det(A − λI) = 0 ⇐⇒ det[2−λ 1; 3 4−λ] = 0 ⇐⇒ (λ − 1)(λ − 5) = 0 ⇐⇒ λ = 1 or λ = 5.
If λ1 = 1 then (A − λI)v = 0 assumes the form [1 1; 3 3][v1; v2] = [0; 0] =⇒ v1 + v2 = 0. The eigenspace corresponding to λ1 = 1 is E1 = {v ∈ R2 : v = r(−1, 1), r ∈ R}. A basis for E1 is {(−1, 1)}.
If λ2 = 5 then (A − λI)v = 0 assumes the form [−3 1; 3 −1][v1; v2] = [0; 0] =⇒ 3v1 − v2 = 0. The eigenspace corresponding to λ2 = 5 is E2 = {v ∈ R2 : v = s(1, 3), s ∈ R}. A basis for E2 is {(1, 3)}.
Figure 73: Figure for Problem 21 (the eigenspaces E1 and E2 drawn as lines in the (v1, v2)-plane).

22. det(A − λI) = 0 ⇐⇒ det[2−λ 3; 0 2−λ] = 0 ⇐⇒ (2 − λ)² = 0 ⇐⇒ λ = 2 of multiplicity two.
If λ1 = 2 then (A − λI)v = 0 assumes the form [0 3; 0 0][v1; v2] = [0; 0] =⇒ v1 ∈ R and v2 = 0. The eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R2 : v = r(1, 0), r ∈ R}. A basis for E1 is {(1, 0)}.
Figure 74: Figure for Problem 22 (the eigenspace E1 drawn as a line in the (v1, v2)-plane).

23. det(A − λI) = 0 ⇐⇒ det[5−λ 0; 0 5−λ] = 0 ⇐⇒ (5 − λ)² = 0 ⇐⇒ λ = 5 of multiplicity two.
If λ1 = 5 then (A − λI)v = 0 assumes the form [0 0; 0 0][v1; v2] = [0; 0] =⇒ v1, v2 ∈ R. The eigenspace corresponding to λ1 = 5 is E1 = {v ∈ R2 : v = r(1, 0) + s(0, 1), r, s ∈ R}. A basis for E1 is {(1, 0), (0, 1)}.
Figure 75: Figure for Problem 23 (E1 is the whole of R2).

24. det(A − λI) = 0 ⇐⇒ det[3−λ 1 −1; 1 3−λ −1; −1 −1 3−λ] = 0 ⇐⇒ (λ − 5)(λ − 2)² = 0 ⇐⇒ λ = 5 or λ = 2 of multiplicity two.
If λ1 = 5 then (A − λI)v = 0 assumes the form [−2 1 −1; 1 −2 −1; −1 −1 −2][v1; v2; v3] = [0; 0; 0] =⇒ v1 + v3 = 0 and v2 + v3 = 0. The eigenspace corresponding to λ1 = 5 is E1 = {v ∈ R3 : v = r(1, 1, −1), r ∈ R}. A basis for E1 is {(1, 1, −1)}.
If λ2 = 2 then (A − λI)v = 0 assumes the form [1 1 −1; 1 1 −1; −1 −1 1][v1; v2; v3] = [0; 0; 0] =⇒ v1 + v2 − v3 = 0. The eigenspace corresponding to λ2 = 2 is E2 = {v ∈ R3 : v = s(−1, 1, 0) + t(1, 0, 1), s, t ∈ R}. A basis for E2 is {(−1, 1, 0), (1, 0, 1)}.
Figure 76: Figure for Problem 24 (E1 spanned by (1, 1, −1); E2 spanned by (−1, 1, 0) and (1, 0, 1)).

25. det(A − λI) = 0 ⇐⇒
−3 − λ −1 0 1 −1 − λ 0 0 2 −2 − λ = 0 ⇐⇒ (λ + 2)3 = 0 ⇐⇒ λ = −2 of multiplicity −1 1 0 v1 0 If λ1 = −2 then (A − λI )v = 0 assumes the form −1 1 2 v2 = 0 =⇒ v1 − v2 = 0 and 000 v3 0 v3 = 0. The eigenspace corresponding to λ1 = −2 is E1 = {v ∈ R3 : v = r(1, 1, 0), r ∈ R}. A basis for E1 is {(1, 1, 0)}. 404 v3 E1 v2 V1 (1, 1, 0) Figure 77: Figure for Problem 25 1 −2 3 v1 0 26. (a) If λ1 = 1 then (A − λI )v = 0 assumes the form 1 −2 3 v2 = 0 1 −2 3 v3 0 =⇒ v1 − 2v2 + 3v3 = 0. The eigenspace corresponding to λ1 = 1 is E1 = {v ∈ R3 : v = r(2, 1, 0) + s(−3, 0, 1), r, s ∈ R}. A basis for E1 is {(2, 1, 0), (−3, 0, 1)}. Now apply the Gram-Schmidt process where v1 = (−3, 0, 1), and v2 = (2, 1, 0). Let u1 = v1 so that v2 , u1 = (2, 1, 0), (−3, 0, 1) = 2(−3) + 1 · 0 + 0 · 1 = −6 and ||u1 ||2 = (−3)2 + 02 + 12 = 10. 6 1 v2 , u1 u1 = (2, 1, 0) + (−3, 0, 1) = (1, 5, 3). u2 = v2 − ||u1 ||2 10 5 Thus, {(−3, 0, 1), (1, 5, 3)} is an orthogonal basis for E1 . −1 −2 3 v1 0 (b) If λ2 = 3 then (A − λI )v = 0 assumes the form 1 −4 3 v2 = 0 =⇒ v1 − v2 = 0 and 1 −2 1 0 v3 v2 − v3 = 0. The eigenspace corresponding to λ2 = 3 is E2 = {v ∈ R3 : v = r(1, 1, 1), r ∈ R}. A basis for E2 is {(1, 1, 1)}. To determine the orthogonality of the vectors, consider the following inner products: (−3, 0, 1), (1, 1, 1) = −3 + 0 + 1 = −2 = 0 and (1, 5, 3), (1, 1, 1) = 1 + 5 + 3 = 9 = 0. Thus, the vectors in E1 are not orthogonal to the vectors in E2 . −1 −1 1 v1 0 1 v2 = 0 27. (a) If λ1 = 2 then (A − λI )v = 0 assumes the form −1 −1 1 1 −1 v3 0 =⇒ v1 + v2 − v3 = 0. The eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R3 : v = r(−1, 1, 0) + s(1, 0, 1), r, s ∈ R}. A basis for E1 is {(−1, 1, 0), (1, 0, 1)}. Now apply the Gram-Schmidt process where v1 = (1, 0, 1), and v2 = (−1, 1, 0). Let u1 = v1 so that v2 , u1 = (−1, 1, 0), (1, 0, 1) = −1 · 1 + 1 · 0 + 0 · 1 = −1 and ||u1 ||2 = 12 + 02 + 12 = 2. v2 , u1 1 1 u2 = v2 − u1 = (−1, 1, 0) + (1, 0, 1) = (−1, 2, 1). 
2 ||u1 || 2 2 Thus, {(1, 0, 1), (−1, 2, 1)} is an orthogonal basis for E1 . 2 −1 1 v1 0 2 1 v2 = 0 . The eigenspace (b) If λ2 = −1 then (A − λI )v = 0 assumes the form −1 1 12 v3 0 3 corresponding to λ2 = −1 is E2 = {v ∈ R : v = r(−1, −1, 1), r ∈ R}. A basis for E2 is {(−1, −1, 1)}. To determine the orthogonality of the vectors, consider the following inner products: (1, 0, 1), (−1, −1, 1) = −1 + 0 + 1 = 0 and (−1, 2, 1), (−1, −1, 1) = 1 − 2 + 1 = 0. Thus, the vectors in E1 are orthogonal to the vectors in E2 . 405 28. We are given that the eigenvalues of A are λ1 = 0 (multiplicity two), and λ2 = a + b + c. cases to consider: λ1 = λ2 or λ1 = λ2 . ab If λ1 = λ2 then λ1 = 0 is of multiplicity three, and (A − λI )v = 0 assumes the form a b ab 0 0 , or equivalently, 0 av1 + bv2 + cv3 = 0. There are two c v1 c v2 = c v3 (28.1) The only way to have three linearly independent eigenvectors for A is if a = b = c = 0. If λ1 = λ2 then λ1 = 0 is of multiplicity two, and λ2 = a + b + c = 0 are distinct eigenvalues. By Theorem 5.7.11, E1 must have dimension two for A to possess a complete set of eigenvectors. The system for determining the eigenvectors corresponding to λ1 = 0 is once more given by (28.1). Since we can choose two variables freely in (28.1), it follows that there are indeed two corresponding linearly independent eigenvectors. Consequently, A is nondefective in this case. 29. (a) Setting λ = 0 in (5.7.4), we have p(λ) = det(A − λI ) = det(A), and in (5.7.5), we have p(λ) = p(0) = bn . Thus, bn = det(A). The value of det(A − λI ) is the sum of products of its elements, one taken from each row and each column. Expanding det(A − λI ) yields equation (5.7.5). The expression involving λn in p(λ) comes from the product, n (aii − λ), of the diagonal elements. 
All the remaining products of the determinant have degree not higher i=1 than n − 2, since, if one of the factors of the product is aij , where i = j , then this product cannot contain the factors λ − aii and λ − ajj . Hence, n (aii − λ) + (terms of degree not higher than n − 2) p(λ) = i=1 so, p(λ) = (−1)n λn + (−1)n−1 (a11 + a22 + · · · + ann )λn−1 + · · · + an . Equating like coeﬃcients from (5.7.5), it follows that b1 = (−1)n−1 (a11 + a22 + · · · + ann ). n i=1 n therefore bn = n (λi − 0) or p(0) = (b) Letting λ = 0, we have from (5.7.6) that p(0) = λi , but from (5.7.5), p(0) = bn , i=1 λi . Letting λ = 1, we have from (5.7.6) that i=1 n n (λ − λi ) = (−1)n [λn − (λ1 + λ2 + · · · + λn )λn−1 + · · · + bn ]. (λi − λ) = (−1)n p(λ) = i=1 i=1 Equating like coeﬃcients with (5.7.5), it follows that b1 = (−1)n−1 (λ1 + λ2 + · · · + λn ). (c) From (a), bn = det(A), and from (b), det(A) = λ1 λ2 · · · λn , so det(A) is the product of the eigenvalues of A. From (a), b1 = (−1)n−1 (a11 + a22 + · · · + ann ) and from (b), b1 = (−1)n−1 (λ1 + λ2 + · · · + λn ), thus a11 + a22 + · · · + ann = λ1 + λ2 + · · · + λn . That is, tr(A) is the sum of the eigenvalues of A. 30. 406 (a) We have det(A) = 19 and tr(A) = 3, so the product of the eigenvalues of A is 19, and the sum of the eigenvalues of A is 3. (b) We have det(A) = −69 and tr(A) = 1, so the product of the eigenvalues of A is -69, and the sum of the eigenvalues of A is 1. (c) We have det(A) = −607 and tr(A) = 24, so the product of the eigenvalues of A is -607, and the sum of the eigenvalues of A is 24. 31. Note that Ei = ∅ since 0 belongs to Ei . Closure under Addition: Let v1 , v2 ∈ Ei . Then A(v1 + v2 ) = Av1 + Av2 = λi v1 + λi v2 = λi (v1 + v2 ) =⇒ v1 + v2 ∈ Ei . Closure under Scalar Multiplication: Let c ∈ C and v1 ∈ Ei . Then A(cv1 ) = c(Av1 ) = c(λi v1 ) = λi (cv1 ) =⇒ cv1 ∈ Ei . Thus, by Theorem 4.3.2, Ei is a subspace of C n . 32. 
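The identities established in Problem 29(c) and used in Problem 30 — det(A) equals the product of the eigenvalues and tr(A) equals their sum — can be spot-checked numerically. A quick sketch with a matrix of our own choosing:

```python
import numpy as np

# Arbitrary test matrix (our choice, not from the text)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

eigvals = np.linalg.eigvals(A)

# det(A) = product of eigenvalues; tr(A) = sum of eigenvalues
assert np.isclose(np.prod(eigvals).real, np.linalg.det(A))
assert np.isclose(np.sum(eigvals).real, np.trace(A))
```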
The condition c1 v1 + c2 v2 = 0 (32.1) =⇒ A(c1 v1 ) + A(c2 v2 ) = 0 =⇒ c1 Av1 + c2 Av2 = 0 =⇒ c1 (λ1 v1 ) + c2 (λ2 v2 ) = 0. (32.2) Substituting c2 v2 from (32.1) into (32.2) yields (λ1 − λ2 )c1 v1 = 0. Since λ2 = λ1 , we must have c1 v1 = 0, but v1 = 0, so c1 = 0. Substituting into (32.1) yields c2 = 0 also. Consequently, v1 and v2 are linearly independent. 33. Consider c1 v1 + c2 v2 + c3 v3 = 0. (33.1) If c1 = 0, then the preceding equation can be written as w1 + w2 = 0, where w1 = c1 v1 and w2 = c2 v2 + c3 v3 . But this would imply that {w1 , w2 } is linearly dependent, which would contradict Theorem 5.7.5 since w1 and w2 are eigenvectors corresponding to diﬀerent eigenvalues. Consequently, we must have c1 = 0. But then (33.1) implies that c2 = c3 = 0 since {v1 , v2 } is a linearly independent set by assumption. Hence {v1 , v2 , v3 } is linearly independent. 34. λ1 = 1 (multiplicity 3), basis: {(0, 1, 1)}. 35. λ1 = 0 (multiplicity 2), basis: {(−1, 1, 0), (−1, 0, 1)}. λ2 = 3, basis: {(1, 1, 1)}. √ √ √ 36. λ1 = 2, basis: {(1, −2 2, 1)}. λ2 = 0, basis: {(1, 0, −1)}. λ3 = 7, basis: {( 2, 1, 2)}. 37. λ1 = −2, basis: {(2, 1, −4)}. λ2 = 3 (multiplicity 2), basis: {(0, 2, 1), (3, 11, 0)}. 38. λ1 = 0 (multiplicity 2), basis: {(0, 1, 0, −1), (1, 0, −1, 0)}. λ2 = 6, basis: {(1, 1, 1, 1)}. λ3 = −2, basis: {1, −1, 1, −1)}. 39. A has eigenvalues: 3 a+ 2 3 λ2 = a − 2 λ1 = 1 2 1 2 a2 + 8b2 , a2 + 8b2 , 407 λ3 = 0. Provided a = ±b, these eigenvalues are distinct, and therefore the matrix is nondefective. If a = b = 0, then the eigenvalue λ = 0 has multiplicity two. A basis for the corresponding eigenspace is {(−1, 0, 1), (−1, 1, 0)}. Since this is two-dimensional, the matrix is nondefective in this case. If a = −b = 0, then the eigenvalue λ = 0 once more has multiplicity two. A basis for the corresponding eigenspace is {(0, 1, 1), (1, 1, 0)}, therefore the matrix is nondefective in this case also. 
If a = b = 0, then A = 02 , so that λ = 0 (multiplicity three), and the corresponding eigenspace is all of R3 . Hence A is nondefective. 40. If a = b = 0, then A = 03 , which is nondefective. We now assume that at least one of either a or b is nonzero. A has eigenvalues: λ1 = b, λ2 = a − b, and λ3 = 3a + b. Provided a = 0, 2b, −b, the eigenvalues are distinct, and therefore A is nondefective. If a = 0, then the eigenvalue λ = b has multiplicity two. In this case, a basis for the corresponding eigenspace is {(1, 0, 1), (0, 1, 0)}, so that A is nondefective. If a = 2b, then the eigenvalue λ = b has multiplicity two. In this case, a basis for the corresponding eigenspace is {(−2, 1, 0), (−1, 0, 1)}, so that A is nondefective. If a = −b, then the eigenvalue λ = −2b has multiplicity two. In this case, a basis for the corresponding eigenspace is {(0, 1, 1), (1, 0, −1)}, so that A is nondefective. Solutions to Section 5.8 True-False Review: 1. TRUE. The terms “diagonalizable” and “nondefective” are synonymous. The diagonalizability of a matrix A hinges on the ability to form an invertible matrix S with a full set of linearly independent eigenvectors of the matrix as its columns. This, in turn, requires the original matrix to be nondefective. 2. TRUE. If we assume that A is diagonalizable, then there exists an invertible matrix S and a diagonal matrix D such that S −1 AS = D. Since A is invertible, we can take the inverse of each side of this equation to obtain D−1 = (S −1 AS )−1 = S −1 A−1 S, and since D−1 is still a diagonal matrix, this equation shows that A−1 is diagonalizable. 1 0 multiplicity 2). However, A and B are not similar. [Reason: If for some invertible matrix S , but since A = I2 , this would imply above.] 3. FALSE. For instance, the matrices A = I2 and B = 1 both have eigenvalue λ = 1 (with 1 A and B were similar, then S −1 AS = B that B = I2 , contrary to our choice of B 4. FALSE. 
An n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors. Moreover, every matrix has infinitely many eigenvectors, obtained by taking scalar multiples of a single eigenvector v, so having infinitely many eigenvectors does not guarantee diagonalizability.

5. TRUE. Assume A is an n × n matrix such that p(λ) = det(A − λI) has no repeated roots. This implies that A has n distinct eigenvalues. Corresponding to each eigenvalue, we can select an eigenvector. Since eigenvectors corresponding to distinct eigenvalues are linearly independent, this yields n linearly independent eigenvectors for A. Therefore, A is nondefective, and hence, diagonalizable.

6. TRUE. Assuming that A is diagonalizable, there exists an invertible matrix S and a diagonal matrix D such that S⁻¹AS = D. Therefore,
D² = (S⁻¹AS)² = (S⁻¹AS)(S⁻¹AS) = S⁻¹ASS⁻¹AS = S⁻¹A²S.
Since D² is still a diagonal matrix, this equation shows that A² is diagonalizable.

7. TRUE. Since In⁻¹AIn = A, A is similar to itself.

8. TRUE. The sum of the dimensions of the eigenspaces of such a matrix is even, and therefore not equal to n. This means we cannot obtain n linearly independent eigenvectors for A, and therefore, A is defective (and not diagonalizable).

Problems:

1. det(A − λI) = 0 ⇐⇒ det[−1−λ −2; −2 2−λ] = 0 ⇐⇒ λ² − λ − 6 = 0 ⇐⇒ (λ − 3)(λ + 2) = 0 ⇐⇒ λ = 3 or λ = −2. A is diagonalizable because it has two distinct eigenvalues.
If λ1 = 3 then (A − λI)v = 0 assumes the form [−4 −2; −2 −1][v1; v2] = [0; 0] =⇒ 2v1 + v2 = 0. The solution set of this system is {(−r, 2r) : r ∈ R}, so the eigenvectors corresponding to λ1 = 3 are v1 = r(−1, 2) where r ∈ R.
If λ2 = −2 then (A − λI)v = 0 assumes the form [1 −2; −2 4][v1; v2] = [0; 0] =⇒ v1 − 2v2 = 0. If we let v2 = s ∈ R, then the solution set of this system is {(2s, s) : s ∈ R}, so the eigenvectors corresponding to λ2 = −2 are v2 = s(2, 1) where s ∈ R.
Thus, the matrix S = [−1 2; 2 1] satisfies S⁻¹AS = diag(3, −2).

2.
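Diagonalization claims of the kind made in Problem 1 can be confirmed numerically by forming S⁻¹AS directly; a sketch using the matrix and eigenvectors as we read them from Problem 1:

```python
import numpy as np

# Problem 1 (as we read it): A with eigenpairs (3, (-1, 2)) and (-2, (2, 1))
A = np.array([[-1.0, -2.0],
              [-2.0,  2.0]])
S = np.array([[-1.0, 2.0],
              [ 2.0, 1.0]])   # eigenvectors as columns

D = np.linalg.inv(S) @ A @ S
assert np.allclose(D, np.diag([3.0, -2.0]))  # S^(-1) A S = diag(3, -2)
```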
det(A − λI ) = 0 ⇐⇒ −7 − λ −4 4 1−λ = 0 ⇐⇒ λ2 + 6λ + 9 = 0 ⇐⇒ (λ + 3)2 = 0 ⇐⇒ λ = −3 of multiplicity two. −4 4 v1 0 = =⇒ v1 − v2 = 0. If we let −4 4 v2 0 v1 = r ∈ R, then the solution set of this system is {(r, r) : r ∈ R}, so the eigenvectors corresponding to λ = −3 are v = r(1, 1) where r ∈ R. A has only one linearly independent eigenvector, so by Theorem 5.8.4, A is not diagonalizable. If λ = −3 then (A − λI )v = 0 assumes the form 3. det(A − λI ) = 0 ⇐⇒ 1−λ 2 −8 −7 − λ = 0 ⇐⇒ λ2 + 6λ + 9 = 0 ⇐⇒ (λ + 3)2 = 0 ⇐⇒ λ = −3 of multiplicity two. 4 −8 v1 0 = =⇒ v1 − 2v2 = 0. If we let 2 −4 v2 0 v2 = r ∈ R, then the solution set of this system is {(2r, r) : r ∈ R}, so the eigenvectors corresponding to λ = −3 are v = r(2, 1) where r ∈ R. A has only one linearly independent eigenvector, so by Theorem 5.8.4, A is not diagonalizable. If λ = −3 then (A − λI )v = 0 assumes the form 4. det(A − λI ) = 0 ⇐⇒ −λ 4 −4 −λ = 0 ⇐⇒ λ2 + 16 = 0 ⇐⇒ λ = ±4i. A is diagonalizable because it has two distinct eigenvalues. 4i 4 v1 0 = =⇒ v1 − iv2 = 0. If we let −4 4i v2 0 v2 = r ∈ C, then the solution set of this system is {(ir, r) : r ∈ C}, so the eigenvectors corresponding to λ = −4i are v = r(i, 1) where r ∈ C. Since the entries of A are real, it follows from Theorem 5.6.8 that v2 = (−i, 1) is an eigenvector corresponding to λ = 4i. i −i Thus, the matrix S = satisﬁes S −1 AS = diag(−4i, 4i). 1 1 If λ = −4i then (A − λI )v = 0 assumes the form 409 5. det(A − λI ) = 0 ⇐⇒ 1−λ 0 1 0 3−λ 1 λ = 4. 0 7 −3 − λ = 0 ⇐⇒ (1 − λ)(λ + 4)(λ − 4) = 0 ⇐⇒ λ = 1, λ = −4 or 00 0 v1 0 7 v2 = 0 If λ = 1 then (A − λI )v = 0 assumes the form 0 2 1 1 −4 v3 0 =⇒ 2v1 − 15v3 = 0 and 2v2 + 7v3 = 0. If we let v3 = 2r where r ∈ R, then the solution set of this system is {(15r, −7r, 2r) : r ∈ R} so the eigenvectors corresponding to λ = 1 are v1 r(15, −7, 2) where r ∈ R. = 500 v1 0 If λ = −4 then (A − λI )v = 0 assumes the form 0 7 7 v2 = 0 111 v3 0 =⇒ v1 = 0 and v2 + v3 = 0. 
If we let v2 = s ∈ R, then the solution set of this system is {(0, s, −s) : s ∈ R} so the eigenvectors corresponding to λ = −4 are v2 = s(0, 1, −1) where s ∈ R. v1 −3 0 0 0 7 v2 = 0 If λ = 4 then (A − λI )v = 0 assumes the form 0 −1 1 1 −7 v3 0 =⇒ v1 = 0 and v2 − 7v3 = 0. If we let v3 = t ∈ R, then the solution set of this system is {(0, 7t, t) : t ∈ R} so the eigenvectors corresponding to λ = 4 are v3 = t(0, 7, 1) where t ∈ R. 15 0 0 1 satisﬁes S −1 AS = diag(1, 4, −4). Thus, the matrix S = −7 7 2 1 −1 6. det(A − λI ) = 0 ⇐⇒ 1−λ 2 2 −2 −3 − λ −2 0 0 −2 − λ = 0 ⇐⇒ (λ + 1)3 = 0 ⇐⇒ λ = −1 of multiplicity three. 2 −2 0 v1 0 If λ = −1 then (A − λI )v = 0 assumes the form 2 −2 0 v2 = 0 =⇒ v1 − v2 = 0 and 2 −2 0 v3 0 v3 ∈ R. If we let v2 = r ∈ R and v3 = s ∈ R, then the solution set of this system is {(r, r, s) : r, s ∈ R}, so the eigenvectors corresponding to λ = −1 are v1 = r(1, 1, 0) and v2 = s(0, 0, 1) where r, s ∈ R. A has only two linearly independent eigenvectors, so by Theorem 5.8.4, A is not diagonalizable. 7. det(A − λI ) = 0 ⇐⇒ two. −λ −2 −2 −2 −λ −2 −2 −2 −λ = 0 ⇐⇒ (λ − 2)2 (λ + 4) = 0 ⇐⇒ λ = −4 or λ = 2 of multiplicity 4 −2 −2 v1 0 4 −2 v2 = 0 If λ = −4 then (A − λI )v = 0 assumes the form −2 −2 −2 4 v3 0 =⇒ v1 − v3 = 0 and v2 − v3 = 0. If we let v3 = r ∈ R, then the solution set of this system is {(r, r, r) : r ∈ R} so the eigenvectors corresponding to λ = −4 are v1 = r(1, 1, 1) where r ∈ R. −2 −2 −2 v1 0 If λ = 2 then (A − λI )v = 0 assumes the form −2 −2 −2 v2 = 0 =⇒ v1 + v2 + v3 = 0. If −2 −2 −2 v3 0 we let v2 = s ∈ R and v3 = t ∈ R, then the solution set of this system is {(−s − t, s, t) : s, t ∈ R}, so two linearly independent eigenvectors corresponding to λ = 2 are v2 = s(−1, 1, 0) and v3 = t(−1, 0, 1). 1 −1 −1 0 1 satisﬁes S −1 AS = diag(−4, 2, 2). Thus, the matrix S = 1 1 1 0 410 −2 − λ −2 −2 4 4 = 0 ⇐⇒ λ2 (λ − 3) = 0 ⇐⇒ λ = 3, or λ = 0 of multiplicity 4−λ two. 
−5 14 v1 0 If λ = 3 then (A − λI )v = 0 assumes the form −2 −2 4 v2 = 0 −2 11 v3 0 =⇒ v1 − v3 = 0 and v2 − v3 = 0. If we let v3 = r ∈ R, then the solution set of this system is {(r, r, r) : r ∈ R}, so the eigenvectors corresponding to λ = 3 are 1 = r(1, 1, 1)where r ∈ R. v −2 1 4 v1 0 If λ = 0 then (A − λI )v = 0 assumes the form −2 1 4 v2 = 0 =⇒ −2v1 + v2 + 4v3 = 0. If −2 1 4 v3 0 we let v1 = s ∈ R and v3 = t ∈ R, then the solution set of this system is {(s, 2s − 4t, t) : s, t ∈ R}, so two linearly independent eigenvectors corresponding to λ = 0 are v2 = s(1, 2, 0) and v3 = t(0, −4, 1). 11 0 Thus, the matrix S = 1 2 −4 satisﬁes S −1 AS = diag(3, 0, 0). 10 1 8. det(A − λI ) = 0 ⇐⇒ 9. det(A − λI ) = 0 ⇐⇒ 2−λ 0 2 1 1−λ 1 0 1−λ −1 0 0 1−λ = 0 ⇐⇒ (λ − 2)(λ − 1)2 = 0 ⇐⇒ λ = 2 or λ = 1 of multiplicity two. v1 0 1 00 0 0 v2 = 0 =⇒ v1 = v2 = 0 and If λ = 1 then (A − λI )v = 0 assumes the form 0 0 2 −1 0 v3 v3 ∈ R. If we let v3 = r ∈ R then the solution set of this system is {(0, 0, r) : r ∈ R}, so there is only one corresponding linearly independent eigenvector. Hence, by Theorem 5.8.4, A is not diagonalizable. 10. det(A − λI ) = 0 ⇐⇒ 4−λ 3 0 11. det(A − λI ) = 0 ⇐⇒ −λ 2 −1 −2 −λ −2 1 2 −λ 0 −1 − λ 2 0 −1 = 0 ⇐⇒ (λ2 + 1)(λ − 4) = 0 ⇐⇒ λ = 4, or λ = ±i. 1−λ 0 0 0 v1 0 If λ = 4 then (A − λI )v = 0 assumes the form 3 −5 −1 v2 = 0 0 2 −3 v3 0 =⇒ 6v1 − 17v3 = 0 and 2v2 − 3v3 = 0. If we let v3 = 6r ∈ C, then the solution set of this system is {(17r, 9r, 6r) : r ∈ C}, so the eigenvectors corresponding to λ = 4 are v1 r(17, , 6) r ∈ C. = 9 where 4−i 0 0 v1 0 −1 − i −1 v2 = 0 =⇒ v1 = 0 and If λ = i then (A − λI )v = 0 assumes the form 3 0 2 1−i v3 0 2v2 + (1 − i)v3 = 0. If we let v3 = −2s ∈ C, then the solution set of this system is {(0, (1 − i)s, 2s) : s ∈ C}, so the eigenvectors corresponding to λ = i are v2 = s(0, 1 − i, 2) where s ∈ C. Since the entries of A are real, v3 = t(0, 1 + i, 2) where t ∈ C are the eigenvectors corresponding to λ = −i by Theorem 5.6.8. 
17 0 0 Thus, the matrix S = 9 1 − i 1 + i satisﬁes S −1 AS = diag(4, i, −i). 6 2 2 = 0 ⇐⇒ λ(λ2 + 9) = 0 ⇐⇒ λ = 0, or λ = ±3i. 411 0 2 −1 v1 0 If λ = 0 then (A − λI )v = 0 assumes the form −2 0 −2 v2 = 0 12 0 v3 0 =⇒ v1 + v3 = 0 and 2v2 − v3 = 0. If we let v3 = 2r ∈ C, then the solution set of this system is {(−2r, r, 2r) : r ∈ C}, so the eigenvectors corresponding to λ = 0 are v1 = (−2, 1, 2) where r ∈ C. r 3i 2 −1 v1 0 If λ = −3i then (A − λI )v = 0 assumes the form −2 3i −2 v2 = 0 12 3i v3 0 =⇒ 5v1 +(−4+3i)v3 = 0 and 5v2 +(2+6i)v3 = 0. If we let v3 = 5s ∈ C, then the solution set of this system is {((4 − 3i)s, (−2 − 6i)s, 5s) : s ∈ C}, so the eigenvectors corresponding to λ = −3i are v2 = s(4 − 3i, −2 − 6i, 5) where s ∈ C. Since the entries of A are real, v3 = t(4 + 3i, −2 + 6i, 5) where t ∈ C are the eigenvectors corresponding to λ = 3i by Theorem 5.6.8. −2 4 + 3i 4 − 3i Thus, the matrix S = 1 −2 + 6i −2 − 6i satisﬁes S −1 AS = diag(0, 3i, −3i). 2 5 5 1−λ −2 0 −2 1−λ 0 0 0 = 0 ⇐⇒ (λ − 3)2 (λ + 1) = 0 ⇐⇒ λ = −1 or λ = 3 of 3−λ multiplicity two. v1 2 −2 0 0 2 0 v2 = 0 If λ = −1 then (A − λI )v = 0 assumes the form −2 0 04 v3 0 =⇒ 2v1 − v2 = 0 and v3 = 0. If we let v1 = r ∈ R, then the solution set of this system is {(r, r, 0) : r ∈ R} so the eigenvectors corresponding to λ = −1 are v1 = r(1, 1, 0) where r ∈ . R v1 −2 −2 0 0 If λ = 3 then (A − λI )v = 0 assumes the form −2 −2 0 v2 = 0 =⇒ v1 + v2 = 0 and 0 00 v3 0 v3 ∈ R. If we let v2 = s ∈ R and v3 = t ∈ R, then the solution set of this system is {(−s, s, t) : s, t ∈ R}, so the eigenvectors corresponding to λ 3 are v2 = s(−1, 1, 0) and v3 = t(0, 0, 1). = 1 −1 0 1 0 satisﬁes S −1 AS = diag(−1, 3, 3). Thus, the matrix S = 1 0 01 12. det(A − λI ) = 0 ⇐⇒ 13. λ1 = 2 (multiplicity 2), basis for eigenspace: {(−3, 1, 0), (3, 0, 1)}. λ2 = 1, basis for eigenspace: {(1, 2, 2)}. −3 3 1 Set S = 1 0 2 . Then S −1 AS = diag(2, 2, 1). 012 14. λ1 = 0 (multiplicity 2), basis for eigenspace: {(0, 1, 0, −1), (1, 0, −1, 0)}. 
λ2 = 2, basis for eigenspace: (1, 1, 1, 1)}. λ3 = 10, basis for eigenspace: {(−1, 1, −1, 1)}. { 0 1 1 −1 1 01 1 −1 Set S = 0 −1 1 −1 . Then S AS = diag(0, 0, 2, 10). −1 01 1 14 . 23 A has eigenvalues λ1 = −1, λ2 = 5 with corresponding linearly independent eigenvectors v1 = (−2, 1) and 15. The given system can be written as x = Ax, where A = 412 −2 1 , then S −1 AS = diag(−1, 5), therefore, under the transformation 11 x = S y, the given system of diﬀerential equations simpliﬁes to v2 = (1, 1). If we set S = y1 y2 = −1 0 0 5 y1 y2 . Hence, y1 = −y1 and y2 = 5y2 . Integrating these equations, we obtain y1 (t) = c1 e−t , y2 (t) = c2 e5t . Returning to the original variables, we have x = Sy = −2 1 c1 e−t c2 e5t 1 1 = −2c1 e−t + c2 e5t c1 e−t + c2 e5t . Consequently, x1 (t) = −2c1 e−t + c2 e5t and x2 (t) = c1 e−t + c2 e5t . 6 −2 . −2 6 A has eigenvalues λ1 = 4, λ2 = 8 with corresponding linearly independent eigenvectors v1 = (1, 1) and 1 −1 v2 = (−1, 1). If we set S = , then S −1 AS = diag(4, 8), therefore, under the transformation 1 1 x = S y, the given system of diﬀerential equations simpliﬁes to 16. The given system can be written as x = Ax, where A = y1 y2 = 4 0 0 8 y1 y2 . Hence, y1 = 4y1 and y2 = 8y2 . Integrating these equations, we obtain y1 (t) = c1 e4t , y2 (t) = c2 e8t . Returning to the original variables, we have x = Sy = c1 e4t c2 e8t 1 −1 1 1 = c1 e4t − c2 e8t c1 e4t + c2 e8t . Consequently, x1 (t) = c1 e4t − c2 e8t and x2 (t) = c1 e4t + c2 e8t . 9 6 . −10 −7 A has eigenvalues λ1 = −1, λ2 = 3 with corresponding linearly independent eigenvectors v1 = (3, −5) and 3 −1 v2 = (−1, 1). If we set S = , then S −1 AS = diag(−1, 3), therefore, under the transformation −5 1 x = S y, the given system of diﬀerential equations simpliﬁes to 17. The given system can be written as x = Ax, where A = y1 y2 = −1 0 0 3 y1 y2 . Hence, y1 = −y1 and y2 = 3y2 . Integrating these equations, we obtain y1 (t) = c1 e−t , y2 (t) = c2 e3t . 
413 Returning to the original variables, we have x = Sy = c1 e−t c2 e3t 3 −1 −5 1 = 3c1 e−t − c2 e3t −5c1 e−t + c2 e3t . Consequently, x1 (t) = 3c1 e−t − c2 e3t and x2 (t) = −5c1 e−t + c2 e3t . −12 −7 . 16 10 A has eigenvalues λ1 = 2, λ2 = −4 with corresponding linearly independent eigenvectors v1 = (1, −2) and 1 7 v2 = (7, −8). If we set S = , then S 1 AS = diag(2, −4), therefore, under the transformation −2 −8 x = S y, the given system of diﬀerential equations simpliﬁes to 18. The given system can be written as x = Ax, where A = y1 y2 = 2 0 0 −4 y1 y2 . Hence, y1 = 2y1 and y2 = −4y2 . Integrating these equations, we obtain y1 (t) = c1 e2t , y2 (t) = c2 e−4t . Returning to the original variables, we have c1 e2t c2 e−4t 1 7 −2 −8 x = Sy = = c1 e2t + 7c2 e−4t −2c1 e2t − 8c2 e−4t . Consequently, x1 (t) = c1 e2t + 7c2 e−4t and x2 (t) = −2c1 e2t − 8c2 e−4t . 01 . −1 0 A has eigenvalues λ1 = i, λ2 = −i with corresponding linearly independent eigenvectors v1 = (1, i) and 1 1 v2 = (1, −i). If we set S = , then S −1 AS = diag(i, −i), therefore, under the transformation i −i x = S y, the given system of diﬀerential equations simpliﬁes to 19. The given system can be written as x = Ax, where A = y1 y2 = i 0 0 −i y1 y2 . Hence, y1 = iy1 and y2 = −iy2 . Integrating these equations, we obtain y1 (t) = c1 eit , y2 (t) = c2 e−it . Returning to the original variables, we have x = Sy = 1 1 i −i c1 eit c2 e−it = c1 eit + c2 e−it i(c1 eit − c2 e−it ) . Consequently, x1 (t) = c1 eit + c2 e−it and x2 (t) = i(c1 eit − c2 e−it ). Using Euler’s formula, these expressions can be written as x1 (t) = (c1 + c2 ) cos t + i(c1 − c2 ) sin t, x2 (t) = i(c1 − c2 ) cos t − (c1 + c2 ) sin t, 414 or equivalently, x1 (t) = a cos t + b sin t, x2 (t) = b cos t − a sin t, where a = c1 + c2 , and b = i(c1 − c2 ). 3 −4 −1 20. The given system can be written as x = Ax, where A = 0 −1 −1 . 
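The solutions obtained in Problems 15–19 can each be verified by substituting back into x′ = Ax. A numerical sketch for Problem 16, with arbitrary constants of our own choosing:

```python
import numpy as np

# Problem 16: x' = Ax with A below, and the claimed general solution
A = np.array([[6.0, -2.0], [-2.0, 6.0]])
c1, c2 = 1.3, -0.7   # arbitrary constants (our choice)
t = 0.4

# x(t) and its derivative, from x1 = c1 e^{4t} - c2 e^{8t}, x2 = c1 e^{4t} + c2 e^{8t}
x  = np.array([c1*np.exp(4*t) - c2*np.exp(8*t),
               c1*np.exp(4*t) + c2*np.exp(8*t)])
dx = np.array([4*c1*np.exp(4*t) - 8*c2*np.exp(8*t),
               4*c1*np.exp(4*t) + 8*c2*np.exp(8*t)])

assert np.allclose(dx, A @ x)  # the claimed solution satisfies x' = Ax
```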
0 −4 2 A has eigenvalue λ1 = −2 with corresponding eigenvector v1 = (1, 1, 1), and eigenvalue λ2 3 with corre= 11 0 1 , sponding linearly independent eigenvectors v2 = (1, 0, 0) and v3 = (0, 1, −4). If we set S = 1 0 1 0 −4 then S −1 AS = diag(−2, 3, 3), therefore, under the transformation x = S y, the given system of diﬀerential equations simpliﬁes to y1 −2 0 0 y1 y2 = 0 3 0 y2 . 003 y3 y3 Hence, y1 = −2y1 , y2 = 3y2 , and y3 = 3y3 . Integrating these equations, we obtain y1 (t) = c1 e−2t , y2 (t) = c2 e3t , y3 (t) = c3 e3t . Returning to the original variables, we 1 x = Sy = 1 1 have 1 0 c1 e−2t c1 e−2t + c2 e3t 0 1 c2 e3t = c1 e−2t + c3 e3t . 0 −4 c3 e3t c1 e−2t − 4c3 e3t Consequently, x1 (t) = c1 e−2t + c2 e3t , x2 (t) = c1 e−2t + c3 e3t , and c1 e−2t − 4c3 e3t . 1 1 −1 1 . 21. The given system can be written as x = Ax, where A = 1 1 −1 1 1 A has eigenvalue λ1 = −1 with corresponding eigenvector v1 = (−1, 1, −1), and eigenvalue λ2 = 2 with corre −1 0 1 0 , sponding linearly independent eigenvectors v2 = (0, 1, 1) and v3 = (1, 0, −1). If we set S = 1 1 −1 1 −1 then S −1 AS = diag(−1, 2, 2), therefore, under the transformation x = S y, the given system of diﬀerential equations simpliﬁes to y1 −1 0 0 y1 y2 = 0 2 0 y2 . 002 y3 y3 Hence, y1 = −y1 , y2 = 2y2 , and y3 = 2y3 . Integrating these equations, we obtain y1 (t) = c1 e−t , y2 (t) = c2 e2t , y3 (t) = c3 e2t . Returning to the original variables, −1 x = Sy = 1 −1 we have 0 1 c1 e−t −c1 e−t + c3 e2t . 1 0 c2 e2t = c1 e−t + c2 e2t 2t −t 2t 2t 1 −1 c3 e −c1 e + c2 e − c3 e 415 Consequently, x1 (t) = −c1 e−t + c3 e2t , x2 (t) = c1 e−t + c2 e2t , and −c1 e−t + (c2 − c3 )e2t . 22. A2 = (SDS −1 )(SDS −1 ) = SD(S −1 S )DS −1 = SDIn DS −1 = SD2 S −1 . We now use mathematical induction to establish the general result. Suppose that for k = m > 2 that Am = SDm S −1 . Then Am+1 = AAm = (SDS −1 )(SDm S −1 = SD(S −1 S )Dm S −1 ) = SDm+1 S −1 . It follows by mathematical induction that Ak = SDk S −1 for k = 1, 2, . . . 23. 
Let A = diag(a1 , a2 , . . . , an ) and let B = diag(b1 , b2 , . . . , bn ). Then from the index form of the matrix product, n 0, if i = j, (AB )ij = aik bkj = aii bij = ai bi , if i = j. k=1 Consequently, AB = diag(a1 b1 , a2 b2 , . . . , an bn ). Applying this result to the matrix D = diag(λ1 , λ2 , . . . , λk ), it follows directly that Dk = diag(λk , λk , . . . , λk ). n 1 2 24. The matrix A has eigenvalues λ1 = 5, λ2 = −1, with corresponding eigenvectors v1 = (1, −3) and 1 2 v2 = (2, −3). Thus, if we set S = , then S −1 AS = D, where D = diag(5, −1). −3 −3 Equivalently, A = SDS −1 . It follows from the results of the previous two examples that 1 2 −3 −3 A3 = SD3 S −1 = 125 0 0 −1 whereas A5 = SD5 S −1 = 1 2 −3 −3 3125 0 0 −1 2 −1 − 3 1 1 3 = 2 −1 − 3 1 1 3 −127 −84 378 251 , −3127 −2084 9378 6251 = . 25. (a) This is self-evident from matrix multiplication. Another perspective on this is that when we multiply a matrix B on the left by a diagonal matrix√ , the ith row of B gets multiplied by the ith√ D√ diagonal element √ of D. Thus, if we multiply the ith row of D, λi , by the ith diagonal element of D, λi , the result in √√ √ √√ the ith row of the product is λi λi = λi . Therefore, D D = D, which means that D is a square root of D. (b) We have √ √ √ √√ (S DS −1 )2 = (S DS −1 )(S DS −1 ) = S ( DI D)S −1 = SDS −1 = A, as required. (c) We begin by diagonalizing A. We have det(A − λI ) = det 6−λ −3 −2 7−λ = (6 − λ)(7 − λ) − 6 = λ2 − 13λ + 36 = (λ − 4)(λ − 9), so the eigenvalues of A are λ = 4 and λ = 9. An eigenvector of A corresponding to λ = 4 is eigenvector of A corresponding to λ = 9 is S= 2 −3 1 2 1 −3 . Thus, we can form and D= 4 0 0 9 . 1 1 , and an 416 √ 20 03 root of A is given by We take D= . A fast computation shows that S −1 = √ √ A = S DS −1 = 3/5 2/5 1/5 −1/5 12/5 −2/5 −3/5 13/5 . By part (b), one square . Directly squaring this result conﬁrms this matrix as a square root of A. 26. (a) Show: A ∼ A. 
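The computations in Problems 24 and 25(c) can be verified numerically. For Problem 24, A itself is not legible in this preview, so we rebuild it from the stated eigendata via A = SDS⁻¹ (an assumption consistent with the quoted results):

```python
import numpy as np

# Problem 24: rebuild A from its stated eigendata, then check A^3
S = np.array([[1.0, 2.0], [-3.0, -3.0]])   # eigenvector columns (1,-3), (2,-3)
D = np.diag([5.0, -1.0])
A = S @ D @ np.linalg.inv(S)

A3 = S @ np.diag([125.0, -1.0]) @ np.linalg.inv(S)   # A^3 = S D^3 S^(-1)
assert np.allclose(A3, np.array([[-127.0, -84.0], [378.0, 251.0]]))
assert np.allclose(np.linalg.matrix_power(A, 3), A3)

# Problem 25(c): the claimed square root of the matrix there
B     = np.array([[6.0, -2.0], [-3.0, 7.0]])
rootB = np.array([[12/5, -2/5], [-3/5, 13/5]])
assert np.allclose(rootB @ rootB, B)   # squaring confirms the square root
```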
The identity matrix, I , is invertible, and I = I −1 . Since IA = AI =⇒ A = I −1 AI , it follows that A ∼ A. (b) Show: A ∼ B ⇐⇒ B ∼ A. A ∼ B =⇒ there exists an invertible matrix S such that B = S −1 AS =⇒ A = SBS −1 = (S −1 )−1 BS −1 . But S −1 is invertible since S is invertible. Consequently, B ∼ A. (c) Show A ∼ B and B ∼ C ⇐⇒ A ∼ C . A ∼ B =⇒ there exists an invertible matrix S such that B = S −1 AS ; moreover, B ∼ C =⇒ there exists an invertible matrix P such that C = P −1 BP . Thus, C = P −1 BP = P −1 (S −1 AS )P = (P −1 S −1 )A(SP ) = (SP )−1 A(SP ) where SP is invertible. Therefore A ∼ C. 27. Let A ∼ B mean that A is similar to B . Show: A ∼ B ⇐⇒ AT ∼ B T . A ∼ B =⇒ there exists an invertible matrix S such that B = S −1 AS =⇒ B T = (S −1 AS )T = S T AT (S −1 )T = S T AT (S T )−1 . S T is invertible because S is invertible, [since det(S ) = det(S T )]. Thus, AT ∼ B T . 28. We are given that Av = λv and B = S −1 AS . B (S −1 v) = (S −1 AS )(S −1 v) = S −1 A(SS −1 )v = S −1 AI v = S −1 Av = S −1 (λv) = λ(S −1 v). Hence, S −1 v is an eigenvector of B corresponding to the eigenvalue λ. 29. (a) S −1 AS = diag(λ1 , λ2 , . . . , λn ) =⇒ det(S −1 AS ) = λ1 λ2 · · · λn =⇒ det(A) det(S −1 ) det(S ) = λ1 λ2 · · · λn =⇒ det(A) = λ1 λ2 · · · λn . Since all eigenvalues are nonzero, it follows that det(A) = 0. Consequently, A is invertible. (b) S −1 AS = diag(λ1 , λ2 , . . . , λn ) =⇒ [S −1 AS ]−1 = [diag(λ1 , λ2 , . . . , λn )]−1 1 11 , ,..., =⇒ S −1 A−1 (S −1 )−1 = diag λ1 λ 2 λn 11 1 =⇒ S −1 A−1 S = diag , ,..., . λ1 λ2 λn 30. (a) S −1 AS = diag(λ1 , λ2 , . . . , λn ) =⇒ (S −1 AS )T = [diag(λ1 , λ2 , . . . , λn )]T =⇒ S T AT (S −1 )T = diag(λ1 , λ2 , . . . , λn ) =⇒ S T AT (S T )−1 = diag(λ1 , λ2 , . . . , λn ). Since we have that Q = (ST )−1 , this implies that Q−1 AT Q = diag(λ1 , λ2 , . . . , λn ). (b) Let MC = [v1 , v2 , v3 , . . . , vn ] where MC denotes the matrix of cofactors of S . 
We see from part (a) that AT is nondefective, which means it possesses a complete set of eigenvectors. Also from part (a), Q−1 AT Q = diag(λ1 , λ2 , . . . , λn ) where Q = (S T )−1 , so AT Q = Q diag(λ1 , λ2 , . . . , λn ) (30.1) If we let MC denote the matrix of cofactors of S , then S −1 = T adj(S ) MC = =⇒ (S −1 )T = det(S ) det(S ) T MC det(S ) T =⇒ (S T )−1 = MC det(S ) =⇒ Q = MC . det(S ) 417 Substituting this result into Equation (30.1), we obtain MC MC AT = diag(λ1 , λ2 , . . . , λn ) det(S ) det(S ) =⇒ AT MC = MC diag(λ1 , λ2 , . . . , λn ) =⇒ AT [v1 , v2 , v3 , . . . , vn ] = [v1 , v2 , v3 , . . . , vn ] diag(λ1 , λ2 , . . . , λn ) =⇒ [AT v1 , AT v2 , AT v3 , . . . , AT vn ] = [λ1 v1 , λ2 v2 , λ3 v3 , . . . , λn vn ] =⇒ AT vi = λvi for i ∈ {1, 2, 3, . . . , n}. Hence, the column vectors of MC are linearly independent eigenvectors of AT . −2 − λ 4 = 0 ⇐⇒ (λ + 3)(λ − 2) = 0 ⇐⇒ λ1 = −3 or λ2 = 2. 1 1−λ If λ1 = −3 the corresponding eigenvectors are of the form v1 = r(−4, 1) where r ∈ R. If λ2 = 2 the corresponding eigenvectors are of the form v2 = s(1, 1) where s ∈ R. −4 1 Thus, a complete set of eigenvectors is {(−4, 1), (1, 1)} so that S = . If MC denotes the matrix 11 1 −1 of cofactors of S , then MC = . Consequently, from Problem 30, (1, −1) is an eigenvector −1 −4 corresponding to λ = −3 and (−1, −4) is an eigenvector corresponding to λ = 2 for the matrix AT . 31. det(A − λI ) = 0 ⇐⇒ λ1 0λ =⇒ [Av1 , Av2 ] = [λv1 , v1 + λv2 ] =⇒ Av1 = λv1 and Av2 = v1 + λv2 =⇒ (A − λI )v1 = 0 and (A − λI )v2 = v1 . 32. S −1 AS = Jλ ⇐⇒ AS = SJλ ⇐⇒ A[v1 , v2 ] = [v1 , v2 ] 2−λ 1 = 0 ⇐⇒ (λ − 3)2 = 0 ⇐⇒ λ1 = 3 of multiplicity two. −1 4−λ If λ1 = 3 the corresponding eigenvectors are of the v1 = r(1, 1) where r ∈ R. Consequently, A does not have a complete set of eigenvectors, so it is a defective matrix. 31 By the preceding problem, J3 = is similar to A. Hence, there exists S = [v1 , v2 ] such that 03 S −1 AS = J3 . From the ﬁrst part of the problem, we can let v1 = (1, 1). 
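The cofactor construction of Problem 30, as applied in Problem 31 above to A = [[−2, 4], [1, 1]], can be spot-checked numerically; a minimal sketch (the cofactor columns are taken from the worked solution):

```python
import numpy as np

# Problem 31: columns of the cofactor matrix M_C of S are
# eigenvectors of A^T.
A = np.array([[-2.0, 4.0], [1.0, 1.0]])
AT = A.T

u1 = np.array([1.0, -1.0])    # claimed eigenvector for lambda = -3
u2 = np.array([-1.0, -4.0])   # claimed eigenvector for lambda = 2
assert np.allclose(AT @ u1, -3 * u1)
assert np.allclose(AT @ u2, 2 * u2)
```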
Now consider (A − λI )v2 = v1 where v1 = (a, b) for a, b ∈ R. Upon substituting, we obtain 33. det(A − λI ) = 0 ⇐⇒ −1 −1 Thus, S takes the form S = 34. λ S −1 AS = 0 0 1 λ 0 1 b−1 1 b 1 1 a b = 1 1 =⇒ −a + b = 1. where b ∈ R, and if b = 0, then S = 0 λ 1 ⇐⇒ A[v1 , v2 , v3 ] = [v1 , v2 , v3 ] 0 λ 0 1 −1 1 0 0 1 λ 1 λ 0 ⇐⇒ [Av1 , Av2 , Av3 ] = [λv1 , v1 + λv2 , v2 + λv3 ] ⇐⇒ Av1 = λv1 , Av2 = v1 + λv2 , and Av3 = v2 + λv3 ⇐⇒ (A − λI )v1 = 0, (A − λI )v2 = v1 , and (A − λI )v3 = v2 . n 35. (a) From (5.8.15), c1 f1 + c2 f2 + · · · + cn fn = 0 ⇐⇒ n ci i=1 sji ej = 0 j =1 . 418 n n ⇐⇒ n sji ci ej = 0 ⇐⇒ j =1 i=1 sji ci = 0, j = 1, 2, . . . , n, since {ei } is a linearly independent set. The i=1 latter equation is just the component form of the linear system S c = 0. Since {fi } is a linearly independent set, the only solution to this system is the trivial solution c = 0, so det(S ) = 0. Consequently, S is invertible. (b) From (5.8.14) and (5.8.15), we have n T (fk ) = n n bik n j =1 i=1 sji ej = i=1 j =1 sji bik ej , k = 1, 2, . . . , n. Replacing i with j and j with i yields n T (fk ) = n sij bjk ei , k = 1, 2, . . . , n. i=1 (∗) j =1 (c) From (5.8.15) and (5.8.13), we have n n T (fk ) = sjk T (ej ) = j =1 n sjk j =1 aij ei , i=1 that is, n T (fk ) = aij sjk ei , k = 1, 2, . . . , n. i=1 n (∗∗) j =1 (d) Subtracting (∗) from (∗∗) yields n (sij bjk − aij sjk ) ei = 0. i=1 n j =1 Thus, since {e1 } is a linearly independent set, n n sij bjk = j =1 aij sjk , i = 1, 2, . . . , n. j =1 But this is just the index form of the matrix equation SB = AS . Multiplying both sides of the preceding equation on the left by S −1 yields B = S −1 AS . Solutions to Section 5.9 True-False Review: 1. TRUE. In the deﬁnition of the matrix exponential function eAt , we see that powers of the matrix A must be computed: (At)2 (At)3 eAt = In + (At) + + + .... 2! 3! In order to do this, A must be a square matrix. 419 2. TRUE. 
We see this by plugging t = 1 into the definition of the matrix exponential function. All terms containing A^3, A^4, A^5, . . . must be zero, leaving us with the result given in this statement.

3. FALSE. The inverse of the matrix exponential function e^{At} is the matrix exponential function e^{−At}, and this will exist for all square matrices A, not just invertible ones.

4. TRUE. The matrix exponential function e^{At} converges to a matrix the same size as A, for all t ∈ R. This is asserted, but not proven, directly beneath Definition 5.9.1.

5. FALSE. The correct statement is (S D S^{−1})^k = S D^k S^{−1}. The matrices S and S^{−1} on the right-hand side of this equation do not get raised to the power k.

6. FALSE. According to Property 1 of the Matrix Exponential Function, we have (e^{At})^2 = (e^{At})(e^{At}) = e^{2At}.

Problems:

1. For A = diag(d1, d2, . . . , dn), note that

e^{At} = I + At + (At)^2/2! + · · · + (At)^k/k! + · · · ,

and each power (At)^k = diag((d1 t)^k, (d2 t)^k, . . . , (dn t)^k). Summing entrywise gives

e^{At} = diag( Σ_{k=0}^{∞} (d1 t)^k/k!, Σ_{k=0}^{∞} (d2 t)^k/k!, . . . , Σ_{k=0}^{∞} (dn t)^k/k! ) = diag(e^{d1 t}, e^{d2 t}, . . . , e^{dn t}).

2. Using Problem 1, we have

e^{At} = diag(e^{−3t}, e^{5t})   and   e^{−At} = diag(e^{3t}, e^{−5t}).

3. We have e^{λ In t} = diag(e^{λt}, e^{λt}, . . . , e^{λt}) = e^{λt} In.

4. (a) We have

BC = [[a, 0], [0, a]] [[0, b], [0, 0]] = [[0, ab], [0, 0]] = [[0, b], [0, 0]] [[a, 0], [0, a]] = CB.

(b) We have C^2 = [[0, b], [0, 0]]^2 = 0_2. Therefore,

e^{Ct} = I + Ct = [[1, 0], [0, 1]] + [[0, b], [0, 0]] t = [[1, bt], [0, 1]].

(c) Using the results in the first two parts of this problem, it follows that

e^{At} = e^{(B + C)t} = e^{Bt} e^{Ct} = [[e^{at}, 0], [0, e^{at}]] [[1, bt], [0, 1]] = [[e^{at}, bt e^{at}], [0, e^{at}]].

5. Define B = [[a, 0], [0, a]] and C = [[0, −b], [b, 0]], so that A = B + C.
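The diagonal-matrix result of Problems 1–2 can be verified numerically; a sketch using a truncated power series as a stand-in for the matrix exponential (the truncation length is an arbitrary choice, not from the text):

```python
import numpy as np

# Partial sums of I + M + M^2/2! + ... approximate e^M.
def expm_series(M, terms=30):
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

t = 0.7
A = np.diag([-3.0, 5.0])                     # Problem 2's matrix
# For a diagonal matrix, e^{At} is the entrywise exponential.
assert np.allclose(expm_series(A * t), np.diag(np.exp([-3 * t, 5 * t])))
assert np.allclose(expm_series(-A * t), np.diag(np.exp([3 * t, -5 * t])))
```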
Now BC = 0a −b 0 −ab 0 Property (1) of the matrix exponential function implies that eAt = e(B +C )t = eBt eCt . Now 5. Deﬁne B = eat 0 eBt = 0 eat = eat 1 0 0 1 b4 0 0 b4 , by Problem 1. To determine eCt , observe ﬁrst that −b2 0 0 −b2 C2 = 0 b3 −b3 0 , C3 = , C4 = , C5 = 0 b5 −b5 0 ,... In general, we have C 2 n = b2 n (−1)n 0 0 (−1)n and C 2n+1 = b2n+1 0 (−1)n+1 (−1)n 0 cos bt − sin bt sin bt cos bt for each integer n ≥ 0. Therefore, from Deﬁnition 5.9.1, we have Ct e = − ∞ (−1)n 2n n=0 (2n)! b ∞ (−1)n 2n+1 n=0 (2n+1)! b ∞ (−1)n 2n+1 n=0 (2n+1)! b ∞ (−1)n 2n n=0 (2n)! b = . Thus, eAt = eBt eCt = eat cos bt − sin bt sin bt cos bt . 6. We have det(A − λI ) = 0 ⇐⇒ 1−λ 0 Eigenvalue λ = 1: We have A − I = 0 0 2 = 0 ⇐⇒ λ2 − 4λ + 3 = 0 ⇐⇒ λ = 1 or λ = 3. 3−λ 2 2 , so we can choose the eigenvector v1 = 1 0 . = CB , so 421 −2 0 Eigenvalue λ = 3: We have A − 3I = We form the matrices S = 1 0 1 1 eAt = SeDt S −1 = 2 0 1 0 and D = 1 0 1 1 , so we can choose the eigenvector v2 = et 0 1 1 0 3 . . From Theorem 5.9.3, we have 1 −1 0 1 0 e3t et 0 = e3t − et e3t . 7. We have 3−λ 1 det(A − λI ) = 0 ⇐⇒ 1 = 0 ⇐⇒ λ2 − 6λ + 8 = 0 ⇐⇒ λ = 2 or λ = 4. 3−λ Eigenvalue λ = 4: We have A − 4I = −1 1 1 1 Eigenvalue λ = 2: We have A − 2I = 1 1 We form the matrices S = eAt = SeDt S −1 = 1 1 1 1 1 −1 1 1 1 −1 8. We have det(A − λI ) = 0 ⇐⇒ Eigenvalue λ = 2i: We have A − 2iI = 1 −1 , so we can choose the eigenvector v2 = 4 0 and D = e4t 0 1 1 , so we can choose the eigenvector v1 = 0 e2t 0 2 . . . From Theorem 5.9.3, we have 1/2 1/2 1/2 −1/2 = 1 4t 2t 2 (e + e 1 4t 2t 2 (e − e ) 1 4t 2 (e 1 4t 2 (e − e2t ) + e2t ) . 1 i . −λ 2 = 0 ⇐⇒ λ2 + 4 = 0 ⇐⇒ λ = ±2i. −2 −λ −2i 2 −2 −2i , so we can choose the eigenvector v1 = Eigenvalue λ = −2i: By taking the conjugate of the eigenvector obtained above, we can choose v2 = We form the matrices S = 1 1 i −i eAt = SeDt S −1 = 1 1 i −i and D = e2it 0 2i 0 0 −2i 0 e−2it 1 −i . . From Theorem 5.9.3, we have 1/2 −i/2 1/2 i/2 = cos 2t − sin 2t sin 2t cos 2t . 
1 i The simpliﬁcation in the last step uses the well-known identities cos x = 2 (eix + e−ix ) and sin x = − 2 (eix − −ix e ). 9. We have det(A − λI ) = 0 ⇐⇒ −1 − λ −3 3 = 0 ⇐⇒ λ2 + 2λ + 10 = 0 ⇐⇒ λ = −1 ± 3i. −1 − λ Eigenvalue λ = −1 + 3i: We have A − (−1 + 3i)I = 1 i . −3i 3 −3 −3i , so we can choose the eigenvector v1 = 422 Eigenvalue λ = −1 − 3i: By taking the conjugate of the eigenvector obtained above, we can choose v2 = 1 . −i 11 i −i We form the matrices S = e(−1+3i)t 0 1 1 i −i eAt = SeDt S −1 = and D = −1 + 3i 0 0 −1 − 3i 0 e(−1−3i)t 1/2 −i/2 1/2 i/2 . From Theorem 5.9.3, we have = e−t cos 3t − sin 3t sin 3t cos 3t . i The simpliﬁcation in the last step uses the well-known identities cos x = 1 (eix + e−ix ) and sin x = − 2 (eix − 2 −ix e ). Alternative Solution: Apply Problem 5 directly. 10. Observe that we are repeating the result of Problem 5 here. This problem gives a traditional solution. We have det(A − λI ) = 0 ⇐⇒ a−λ −b b = 0 ⇐⇒ λ2 − 2aλ + a2 + b2 = 0 ⇐⇒ λ = a ± bi. a−λ Eigenvalue λ = a + bi: We have A − (a + bi)I = −bi b −b −bi , so we can choose the eigenvector v1 = 1 i . Eigenvalue λ = a − bi: By taking the conjugate of the eigenvector obtained above, we can choose v2 = 1 . −i We form the matrices S = eAt = SeDt S −1 = 11 i −i 1 1 i −i and D = e(a+bi)t 0 a + bi 0 0 a − bi 0 e(a−bi)t . From Theorem 5.9.3, we have 1/2 −i/2 1/2 i/2 = eat cos bt − sin bt sin bt cos bt . i The simpliﬁcation in the last step uses the well-known identities cos x = 1 (eix + e−ix ) and sin x = − 2 (eix − 2 −ix e ). 11. We have det(A − λI ) = 0 ⇐⇒ 3−λ 1 0 −2 −λ 0 −2 −2 = 0 ⇐⇒ (3 − λ)(λ2 − 3λ + 2) = 0 ⇐⇒ λ = 1 or λ = 2 or λ = 3. 3−λ 2 −2 −2 1 Eigenvalue λ = 1: We have A − I = 1 −1 −2 , and we may choose the vector v1 = 1 in 0 0 2 0 nullspace(A − I ) as an eigenvector corresponding to λ = 1. 1 −2 −2 2 Eigenvalue λ = 2: We have A − 2I = 1 −2 −2 , and we may choose the vector v2 = 1 in 0 0 1 0 nullspace(A − 2I ) as an eigenvector corresponding to λ = 2. 
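The rotation-scaling form obtained in Problems 5 and 8–10 can be checked numerically. A sketch assuming the block form A = [[a, −b], [b, a]] (consistent with the e^{at}[[cos bt, −sin bt], [sin bt, cos bt]] answers above), again with a truncated series standing in for the exponential:

```python
import numpy as np

def expm_series(M, terms=40):
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

a, b, t = -1.0, 3.0, 0.8                     # Problem 9's a and b
A = np.array([[a, -b], [b, a]])
expected = np.exp(a * t) * np.array([[np.cos(b * t), -np.sin(b * t)],
                                     [np.sin(b * t),  np.cos(b * t)]])
assert np.allclose(expm_series(A * t), expected)
```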
423 Eigenvalue λ = 3: We have A − 3I nullspace(A − 3I ) as an eigenvector 1 We form the matrices S = 1 0 0 −2 −2 1 = 1 −3 −2 , and we may choose the vector v3 = 1 in 0 0 0 −1 corresponding to λ = 3. 2 1 100 1 1 and D = 0 2 0 . From Theorem 5.9.3, we have 0 −1 003 eAt = SeDt S −1 12 1 100 −1 2 1 1 0 2 0 1 −1 0 = 1 1 0 0 −1 003 0 0 −1 t t tt t 2t e (2e − 1) −2e (e − 1) −e (e − 1) = et (et − 1) −et (et − 2) −et (e2t − 1) . 0 0 e3t 12. We have det(A − λI ) = 0 ⇐⇒ 6−λ 8 4 −2 −2 − λ −2 −1 −2 = 0 ⇐⇒ (λ − 2)2 (λ − 1) = 0 ⇐⇒ λ = 2 or λ = 1. 1−λ 1 4 −2 −1 Eigenvalue λ = 2: We have A − 2I = 8 −4 −2 , and we may choose two vectors, v1 = 2 and 0 4 −2 −1 1 v2 = 0 in nullspace(A − 2I ) as an eigenvector corresponding to λ = 2. 4 5 −2 −1 Eigenvalue λ = 1: We have A − I = 8 −3 −2 . The reduced row-echelon form of this matrix is 4 −2 0 1 0 −1 1 0 1 −2 , so we make choose v3 = 2 in nullspace(A − I ) as an eigenvector corresponding to 00 0 1 λ = 1. 111 200 We form the matrices S = 2 0 2 and D = 0 2 0 . From Theorem 5.9.3, we have 041 001 eAt = SeDt S −1 2t 111 e 0 0 4 −3/2 −1 0 = 2 0 2 0 e2t 0 1 −1/2 t 041 0 0e −4 2 1 t t t t t t e (5e − 4) 2e (1 − e ) e (1 − e ) = 8et (et − 1) et (4 − 3et ) 2et (1 − et ) . 4et (et − 1) 2et (1 − et ) et 424 13. Direct computation shows that A2 = 02 . Therefore, eAt = I + At = 1 0 0 1 + −3 −1 9 3 −3t + 1 9t −t 3t + 1 t= . 14. Direct computation shows that A2 = 02 . Therefore, eAt = I + At = 1 0 0 1 1 1 −1 −1 + t= 1+t −t t 1−t . 15. Direct computation shows eAt 1 = 0 0 0 1 0 000 that A2 = 0 0 0 and 100 0 000 0 0 + 1 0 0 t + 0 1 010 1 16. Direct computation shows that −4 8 2 A2 = −1 2 −4 −1 −6 0 0 + 0 −2 1 2 1 10 eAt = 0 1 00 1 − t − 2t2 −t2 /2 = t + t2 −6t + 4t2 1 − 2t + t2 2t − 2t2 0 0 2 17. Direct computation shows that A = 0 0 (At)2 (At)3 eAt = I4 + At + + 2 6 1000 0 0 1 0 0 0 = 0 0 1 0 + 0 0000 0 2 3 1 t t /2 t /6 0 1 t t 2 /2 . = 0 0 1 t 00 0 1 0 0 0 0 1 0 0 0 A3 = 03 . Therefore, 0 0 0 1 00 0 t2 1 0 . 0 = t 2 t2 /2 t 1 0 −4 −1 and A3 = 03 . 
Therefore, 2 −5 −4 8 −4 t2 −1 t + −1 2 −1 2 3 2 −4 2 −5t − 2t2 −t − t2 /2 . 1 + 3t + t2 0 0 1 3 0 , A = 0 0 0 0 t00 0 t 0 + 0 0 t 000 0 0 0 0 0 0 0 0 0 0 0 0 1 0 , and A4 = 04 . Therefore, 0 0 0 t2 /2 0 0 0 0 t 3 /6 2 0 0 0 0 0 t /2 0 + 0 0 0 0 0 0 0 0 0 0 000 0 18. It is easy to see (and can be formally veriﬁed by induction on k ) that the matrix Ak contains all zero entries except for the (k + i, i) entry for 1 ≤ i ≤ n − k . In particular, An = An+1 = An+2 = · · · = 0n . 425 Therefore, by Deﬁnition 5.9.1, we have eAt = In + At + 1 t t2 2! = t3 3! . . . tn−1 (n−1)! (At)2 (At)3 (At)n−1 + + ··· + 2! 3! (n − 1)! 0 0 0 ... 0 1 0 0 ... 0 t 1 0 ... 0 t2 t 1 ... 0 2! . . . . .. . . . .. . . . . tn−2 tn−3 tn−4 ... 1 (n−2)! (n−3)! (n−4)! . 19. Assume that A0 is m × m and B0 is n × n. Note that these two matrices must be square in order for Ak 0 0 eA0 t and eB0 t to make sense. The key point is that Ak = . Therefore, k 0 B0 (At)2 (At)3 + + ... 2! 3! 0 A0 0 + t+ In 0 B0 eAt = I + At + = = Im 0 Im + A0 t + + (A0 t)3 3! e 0 0 eB0 t 0 2 B0 t2 + 2! + ... 0 A0 t = (A0 t)2 2! A2 0 0 A3 0 0 t3 + ... 3! 0 3 B0 0 In + B0 t + (B0 t)2 2! + (B0 t)3 3! + ... . Solutions to Section 5.10 True-False Review: 1. TRUE. This is immediate from Deﬁnition 5.10.1. 2. TRUE. The roots of p(λ) = λ3 + λ are λ = 0 and λ = ±i. Therefore, the matrix has complex eigenvalues. But Theorem 5.10.4 indicates that a real matrix that is symmetric must have only real eigenvalues. 3. FALSE. The zero matrix is a real, symmetric matrix for which both v1 and v2 are eigenvectors. 4. TRUE. This is the statement of the Principal Axes Theorem (Theorem 5.10.6). 5. TRUE. This is a direct application of Lemma 5.10.9. 6. TRUE. Assuming that A and B are orthogonal matrices, then A and B are invertible, and hence AB is invertible. Moreover, (AB )−1 = B −1 A−1 = B T AT = (AB )T , so that AB meets the requirements of orthogonality spelled out in Deﬁnition 5.10.1. 7. TRUE. 
This is essentially the content of Theorem 5.10.3, since a set of orthogonal unit vectors are precisely a set of orthonormal vectors. 8. TRUE. Let S be the matrix consisting of a complete set of orthonormal eigenvectors of A. Then S T AS = diag(λ1 , λ2 , . . . , λn ) = D, where the λi are the corresponding eigenvalues. Then S is an orthogonal matrix: S T = S −1 . Hence, A = SDS T and AT = (SDS T )T = SDT S T = SDS T = A, so A is symmetric. 426 Problems: 1. det(A − λI ) = 0 ⇐⇒ 2−λ 2 2 −1 − λ = 0 ⇐⇒ λ2 − λ − 6 = 0 ⇐⇒ (λ + 2)(λ − 3) = 0 ⇐⇒ λ = −2 or λ = 3. 4 2 2 1 v1 0 = =⇒ 2v1 + v2 = 0. v1 = (1, −2), is v2 0 2 1 v1 is a unit eigenvector corresponding = √ , −√ an eigenvector corresponding to λ = −2 and w1 = ||v1 || 5 5 to λ = −2. −1 2 v1 0 If λ = 3 then (A − λI )v = 0 assumes the form = =⇒ v1 + 2v2 = 0. v2 = (2, 1), 2 −4 v2 0 2 1 v2 = √ ,√ is a unit eigenvector corresponding is an eigenvector corresponding to λ = 3 and w2 = ||v2 || 5 5 to λ = 3. 2 1 √ √ 5 5 T Thus, S = 2 1 and S AS = diag(−2, 3). √ −√ 5 5 If λ = −2 then (A − λI )v = 0 assumes the form 2. det(A − λI ) = 0 ⇐⇒ 4−λ 6 6 9−λ = 0 ⇐⇒ λ2 − 13λ = 0 ⇐⇒ λ(λ − 13) = 0 ⇐⇒ λ = 0 or λ = 13. 46 v1 0 = =⇒ 2v1 + 3v2 = 0. v1 = (−3, 2), is 69 v2 0 v1 3 2 an eigenvector corresponding to λ = 0 and w1 = = −√ , √ is a unit eigenvector corresponding ||v1 || 13 13 to λ = 0. −9 6 v1 0 If λ = 13 then (A − λI )v = 0 assumes the form = =⇒ −3v1 + 2v2 = 0. 6 −4 v2 0 2 3 v2 = √ ,√ is a unit eigenvector v2 = (2, 3), is an eigenvector corresponding to λ = 13 and w2 = ||v2 || 13 13 corresponding to λ = 13. 2 3 √ −√ 13 13 T Thus, S = 2 3 and S AS = diag(0, 13). √ √ 13 13 If λ = 0 then (A − λI )v = 0 assumes the form 3. det(A − λI ) = 0 ⇐⇒ 1−λ 2 2 1−λ = 0 ⇐⇒ (λ − 1)2 − 4 = 0 ⇐⇒ (λ + 1)(λ − 3) = 0 ⇐⇒ λ = −1 or λ = 3. 22 v1 0 = =⇒ v1 + v2 = 0. v1 = (−1, 1), is 22 v2 0 v1 1 1 an eigenvector corresponding to λ = −1 and w1 = = −√ , √ is a unit eigenvector corresponding ||v1 || 2 2 to λ = −1. 
−2 2 v1 0 If λ = 3 then (A − λI )v = 0 assumes the form = =⇒ v1 − v2 = 0. v2 = (1, 1), is 2 −2 v2 0 1 1 v2 an eigenvector corresponding to λ = 3 and w2 = = √ ,√ is a unit eigenvector corresponding to ||v2 || 2 2 λ = 3. If λ = −1 then (A − λI )v = 0 assumes the form 427 1 −√ 2 Thus, S = 1 √ 2 1 √ 2 T 1 and S AS = diag(−1, 3). √ 2 4. det(A − λI ) = 0 ⇐⇒ −λ 0 0 −2 − λ 3 0 3 0 −λ = 0 ⇐⇒ (λ + 2)(λ − 3)(λ + 3) = 0 ⇐⇒ λ = −3, λ = −2, or λ = 3. 3 v1 0 0 v2 = 0 =⇒ v1 + v3 = 0 and v2 = 0. 3 v3 0 1 1 v1 = − √ , 0, √ is a unit v1 = (−1, 0, 1), is an eigenvector corresponding to λ = −3 and w1 = ||v1 || 2 2 eigenvector corresponding to λ = −3. v1 0 203 If λ = −2 then (A − λI )v = 0 assumes the form 0 0 0 v2 = 0 =⇒ v1 = v3 = 0 and v2 ∈ R. 302 v3 0 v2 v2 = (0, 1, 0), is an eigenvector corresponding to λ = −2 and w2 = = (0, 1, 0) is a unit eigenvector ||v2 || corresponding to λ = −2. −3 0 3 v1 0 0 v2 = 0 =⇒ v1 − v3 = 0 and If λ = 3 then (A − λI )v = 0 assumes the form 0 −5 3 0 −3 v3 0 1 1 v3 = √ , 0, √ is a unit v2 = 0. v3 = (1, 0, 1), is an eigenvector corresponding to λ = 3 and w3 = ||v3 || 2 2 eigenvector corresponding to λ = 3. 1 1 −√ 0√ 2 2 01 0 and S T AS = diag(−3, −2, 3). Thus, S = 1 1 √ 0√ 2 2 3 If λ = −3 then (A − λI )v = 0 assumes the form 0 3 5. det(A − λI ) = 0 ⇐⇒ 1−λ 2 1 2 4−λ 2 1 2 1−λ 0 1 0 = 0 ⇐⇒ λ3 − 6λ2 = 0 ⇐⇒ λ2 (λ − 6) = 0 ⇐⇒ λ = 6 or λ = 0 of multiplicity two. 121 v1 0 If λ = 0 then (A − λI )v = 0 assumes the form 2 4 2 v2 = 0 =⇒ v1 + 2v2 + v3 = 0. 121 v3 0 v1 = (−1, 0, 1) and v2 = (−2, 1, 0) are linearly independent eigenvectors corresponding to λ = 0. v1 and v2 are not orthogonal since v1 , v2 = 2 = 0, so we will use the Gram-Schmidt procedure. Let u1 = v1 = (−1, 0, 1), so u2 = v2 − v2 , u1 2 u1 = (−2, 1, 0) − (−1, 0, 1) = (−1, 1, −1). ||u1 ||2 2 u1 1 1 = − √ , 0, √ ||u1 || 2 2 corresponding to λ = 0. 
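The orthogonal diagonalization in Problem 1 can be confirmed with `np.linalg.eigh`, which returns an orthogonal eigenvector matrix for a real symmetric input (a numerical check, not the text's hand computation):

```python
import numpy as np

# Problem 1: A = [[2, 2], [2, -1]] has eigenvalues -2 and 3.
A = np.array([[2.0, 2.0], [2.0, -1.0]])
w, S = np.linalg.eigh(A)                     # eigenvalues in ascending order
assert np.allclose(w, [-2.0, 3.0])
assert np.allclose(S.T @ S, np.eye(2))       # S is orthogonal: S^T = S^{-1}
assert np.allclose(S.T @ A @ S, np.diag(w))  # S^T A S = diag(-2, 3)
```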
Now w1 = and w2 = u2 = ||u2 || 1 1 1 −√ , √ , √ 3 3 3 are orthonormal eigenvectors 428 −5 2 1 v1 0 2 v2 = 0 =⇒ v1 − v3 = 0 and If λ = 6 then (A − λI )v = 0 assumes the form 2 −2 1 2 −5 v3 0 1 2 1 v3 = √ ,√ ,√ is v2 − 2v3 = 0. v3 = (1, 2, 1), is an eigenvector corresponding to λ = 6 and w3 = ||v3 || 6 6 6 a unit eigenvector corresponding to λ = 6. 1 1 1 √ −√ −√ 2 3 6 1 2 √ √ and S T AS = diag(0, 0, 6). 0 Thus, S = 3 6 1 1 1 √ √ −√ 2 3 6 6. det(A − λI ) = 0 ⇐⇒ 2−λ 0 0 0 3−λ 1 0 1 3−λ = 0 ⇐⇒ (λ − 2)2 (λ − 4) = 0 ⇐⇒ λ = 4 or λ = 2 of multiplicity two. 000 v1 0 If λ = 2 then (A − λI )v = 0 assumes the form 0 1 1 v2 = 0 =⇒ v2 + v3 = 0 and 011 v3 0 v1 ∈ R. v1 = (1, 0, 0) and v2 = (0, −1, 1) are linearly independent eigenvectors corresponding to λ = 2, and 1 v2 1 v1 = (1, 0, 0) and w2 = = 0, − √ , √ are unit eigenvectors corresponding to λ = 2. w1 w1 = ||v1 || ||v2 || 2 2 and w2 are also orthogonal because w1 , w2 = 0. −2 0 0 v1 0 1 v2 = 0 =⇒ v1 = 0 and If λ = 4 then (A − λI )v = 0 assumes the form 0 −1 0 1 −1 v3 0 1 1 v3 = 0, √ , √ is a v2 − v3 = 0. v3 = (0, 1, 1), is an eigenvector corresponding to λ = 4 and w3 = ||v3 || 2 2 unit eigenvector corresponding to λ = 4. 1 0 0 1 1 √ 0 −√ T Thus, S = 2 2 and S AS = diag(2, 2, 4). 1 1 √ √ 0 2 2 7. det(A − λI ) = 0 ⇐⇒ −λ 1 0 1 −λ 0 0 0 1−λ = 0 ⇐⇒ (λ − 1)2 (λ + 1) = 0 ⇐⇒ λ = −1 or λ = 1 of multiplicity two. −1 10 v1 0 If λ = 1 then (A − λI )v = 0 assumes the form 1 −1 0 v2 = 0 =⇒ v1 − v2 = 0 and 0 00 v3 0 v3 ∈ R. v1 = (1, 1, 0) and v2 = (0, 0, 1) are linearly independent eigenvectors corresponding to λ = 1, and 1 1 v2 v1 = √ , √ , 0 and w2 = = (0, 0, 1) are unit eigenvectors corresponding to λ = 1. w1 w1 = ||v1 || ||v2 || 2 2 and w2 are also orthogonal because w1 , w2 = 0. 110 v1 0 If λ = −1 then (A − λI )v = 0 assumes the form 1 1 0 v2 = 0 =⇒ v1 + v2 = 0 and v3 = 0. 002 v3 0 429 v3 = (−1, 1, 0), is an eigenvector corresponding to λ = −1 and w3 = eigenvector corresponding to 1 1 −√ 0√ 2 2 1 1 Thus, S = √ 0√ 2 2 01 0 8. 
det(A − λI ) = 0 ⇐⇒ v3 = ||v3 || 1 1 −√ , √ , 0 2 2 is a unit λ = −1. and S T AS = diag(−1, 1, 1). 1−λ 1 −1 1 −1 1−λ 1 1 1−λ λ = −1. = 0 ⇐⇒ (λ − 2)2 (λ + 1) = 0 ⇐⇒ λ = 2 of multiplicity two −1 1 −1 v1 0 1 v2 = 0 =⇒ v1 − v2 + v3 = 0. If λ = 2 then (A − λI )v = 0 assumes the form 1 −1 −1 1 −1 v3 0 v1 = (1, 1, 0) and v2 = (−1, 0, 1) are linearly independent eigenvectors corresponding to λ = 2. v1 and v2 are not orthogonal since v1 , v2 = −1 = 0, so we will use the Gram-Schmidt procedure. Let u1 = v1 = (1, 1, 0), so u2 = v2 − 1 u1 1 = √ , √ ,0 ||u1 || 2 2 corresponding to λ = 2. 1 v2 , u1 u1 = (−1, 0, 1) + (1, 1, 0) = ||u1 ||2 2 11 − , ,1 . 22 1 u2 1 2 = −√ , √ , √ are ||u2 || 6 6 6 v1 2 1 −1 1 v2 = If λ = −1 then (A − λI )v = 0 assumes the form 1 2 −1 1 2 v3 Now w1 = and w2 = orthonormal eigenvectors 0 0 =⇒ v1 − v3 = 0 and 0 1 1 1 v3 v2 + v3 = 0. v3 = (1, −1, 1), is an eigenvector corresponding to λ = −1 and w3 = = √ , −√ , √ ||v3 || 3 3 3 is a unit eigenvector corresponding to λ = −1. 1 1 1 √ √ −√ 2 6 3 1 1 1 √ √ − √ and S T AS = diag(2, 2, −1). Thus, S = 2 6 3 2 1 √ √ 0 6 3 9. det(A − λI ) = 0 ⇐⇒ λ = 2. 1−λ 0 −1 0 1−λ 1 −1 1 −λ = 0 ⇐⇒ (1 − λ)(λ + 1)(λ − 2) = 0 ⇐⇒ λ = −1, λ = 1, or 0 −1 v1 0 2 1 v2 = 0 =⇒ 2v1 − v3 = 0 and 1 1 v3 0 v1 1 1 2 2v2 + v3 = 0. v1 = (1, −1, 2), is an eigenvector corresponding to λ = −1 and w1 = = √ , −√ , √ ||v1 || 6 6 6 is a unit eigenvector corresponding to λ = −1. 2 If λ = −1 then (A − λI )v = 0 assumes the form 0 −1 430 0 −1 v1 0 0 1 v2 = 0 =⇒ v1 − v2 = 0 and 1 −1 v3 0 1 1 v2 = √ , √ , 0 is a unit v3 = 0. v2 = (1, 1, 0), is an eigenvector corresponding to λ = 1 and w2 = ||v2 || 2 2 eigenvector corresponding to λ = 1. −1 0 −1 v1 0 1 v2 = 0 =⇒ v1 + v3 = 0 and If λ = 2 then (A − λI )v = 0 assumes the form 0 −1 −1 1 −2 v3 0 1 1 1 v3 = −√ , √ , √ v2 − v3 = 0. v3 = (−1, 1, 1), is an eigenvector corresponding to λ = 2 and w3 = ||v3 || 3 3 3 is a unit eigenvector corresponding to λ = 2. 
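For the repeated-eigenvalue cases (Problems 5–7), the hand computation uses Gram–Schmidt inside the eigenspace; `np.linalg.eigh` performs the equivalent orthogonalization internally, so it gives an independent check. A sketch for Problem 5's matrix:

```python
import numpy as np

# Problem 5: A has eigenvalue 0 with multiplicity two and eigenvalue 6.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 1.0]])
w, S = np.linalg.eigh(A)
assert np.allclose(sorted(w), [0.0, 0.0, 6.0], atol=1e-9)
assert np.allclose(S.T @ S, np.eye(3), atol=1e-9)   # orthonormal columns
assert np.allclose(S.T @ A @ S, np.diag(w), atol=1e-9)
```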
1 1 1 √ √ −√ 6 2 3 1 1 1 −√ √ √ and S T AS = diag(−1, 1, 2). Thus, S = 6 2 3 2 1 √ √ 0 6 3 0 If λ = 1 then (A − λI )v = 0 assumes the form 0 −1 3−λ 3 4 4 0 = 3−λ or λ = 8. 5 If λ = −2 then (A − λI )v = 0 assumes the form 3 4 10. det(A − λI ) = 0 ⇐⇒ 3 3−λ 0 0 ⇐⇒ (λ − 3)(λ + 2)(λ − 8) = 0 ⇐⇒ λ = −2, λ = 3, 4 v1 0 0 v2 = 0 =⇒ 4v1 + 5v3 = 0 and 4v2 − 5 v3 0 1 v1 3 4 = −√ , √ , √ 3v3 = 0. v1 = (−5, 3, 4), is an eigenvector corresponding to λ = −2 and w1 = ||v1 || 25252 is a unit eigenvector corresponding to λ = −2. 034 v1 0 If λ = 3 then (A − λI )v = 0 assumes the form 3 0 0 v2 = 0 =⇒ v1 = 0 and 3v2 + 4v3 = 0. 400 v3 0 v2 43 v2 = (0, −4, 3), is an eigenvector corresponding to λ = 3 and w2 = = 0, − , is a unit eigenvector ||v2 || 55 corresponding to λ = 3. −5 3 4 v1 0 0 v2 = 0 =⇒ 4v1 − 5v3 = 0 and If λ = 8 then (A − λI )v = 0 assumes the form 3 −5 4 0 −5 v3 0 1 3 4 v3 = √,√,√ 4v2 − 3v3 = 0. v3 = (5, 3, 4), is an eigenvector corresponding to λ = 8 and w3 = ||v3 || 25252 is a unit eigenvector corresponding to λ = 8. 1 1 −√ 0√ 2 2 3 4 3 √ √ and S T AS = diag(−2, 3, 8). − Thus, S = 5 5 2 52 4 3 4 √ √ 5 52 52 3 5 0 431 −3 − λ 2 2 2 2 = 0 ⇐⇒ (λ + 5)2 (λ − 1) = 0 ⇐⇒ λ = −5 of −3 − λ multiplicity two, or λ = 1. −4 2 2 v1 0 2 v2 = 0 =⇒ v1 − v3 = 0 and If λ = 1 then (A − λI )v = 0 assumes the form 2 −4 2 2 −4 v3 0 1 1 1 v1 = √ ,√ ,√ is a v2 − v3 = 0. v1 = (1, 1, 1) is an eigenvector corresponding to λ = 1 and w1 = ||v1 || 3 3 3 unit eigenvector corresponding to λ = 1. 222 v1 0 If λ = −5 then (A − λI )v = 0 assumes the form 2 2 2 v2 = 0 =⇒ v1 + v2 + v3 = 0. 222 v3 0 v2 = (−1, 1, 0) and v3 = (−1, 0, 1) are linearly independent eigenvectors corresponding to λ = −5. v2 and v3 are not orthogonal since v2 , v3 = −1 = 0, so we will use the Gram-Schmidt procedure. Let u2 = v2 = (−1, 1, 0), so 11. det(A − λI ) = 0 ⇐⇒ u3 = v3 − 2 −3 − λ 2 1 v3 , u2 u2 = (−1, 0, 1) − (−1, 1, 0) = ||u2 ||2 2 11 − ,− ,1 . 
22 1 u3 1 2 u2 1 1 = − √ , √ , 0 and w3 = = −√ , −√ , √ ||u2 || ||u3 || 2 2 6 6 6 corresponding to λ = −5. 1 1 1 √ −√ −√ 3 2 6 1 1 1 √ √ − √ and S T AS = diag(1, −5, −5). Thus, S = 3 2 6 1 2 √ √ 0 3 6 Now w2 = 12. det(A − λI ) = 0 ⇐⇒ −λ 1 1 1 −λ 1 1 1 −λ are orthonormal eigenvectors = 0 ⇐⇒ (λ + 1)2 (λ − 2) = 0 ⇐⇒ λ = −1 of multiplicity two, or λ = 2. 111 v1 0 If λ = −1 then (A − λI )v = 0 assumes the form 1 1 1 v2 = 0 =⇒ v1 + v2 + v3 = 0. 0 111 v3 v1 = (−1, 0, 1) and v2 = (−1, 1, 0) are linearly independent eigenvectors corresponding to λ = −1. v1 and v2 are not orthogonal since v1 , v2 = 1 = 0, so we will use the Gram-Schmidt procedure. Let u1 = v1 = (−1, 0, 1), so u2 = v2 − v2 , u1 1 u1 = (−1, 1, 0) − (−1, 0, 1) = ||u1 ||2 2 1 1 u1 = − √ , 0, √ ||u1 || 2 2 corresponding to λ = −1. 1 1 − , 1, − 2 2 . u2 1 2 1 = −√ , √ , −√ are orthonormal eigenvectors ||u2 || 6 6 6 −2 1 1 v1 0 1 v2 = 0 =⇒ v1 − v3 = 0 and If λ = 2 then (A − λI )v = 0 assumes the form 1 −2 1 1 −2 v3 0 Now w1 = and w2 = 432 v2 − v3 = 0. v3 = (1, 1, 1), is an eigenvector corresponding to λ = 2 and w3 = v3 = ||v3 || 1 1 1 √ ,√ ,√ 3 3 3 is a unit eigenvector corresponding to = 2. λ 1 1 1 √ −√ −√ 2 6 3 2 1 √ √ and S T AS = diag(−1, −1, 2). 0 Thus, S = 6 3 1 1 1 √ √ −√ 2 6 3 13. A has eigenvalues λ1 = 4 and λ2 = −2 with corresponding eigenvectors v1 = (1, 1) and v2 = (−1, 1). 1 1 Therefore, a set of principal axes is √ (1, 1), √ (−1, 1) . Relative to these principal axes, the quadratic 2 2 2 2 form reduces to 4y1 − 2y2 . 14. A has eigenvalues λ1 = 7 and λ2 = 3 with corresponding eigenvectors v1 = (1, 1) and v2 = (1, −1). 1 1 Therefore, a set of principal axes is √ (1, 1), √ (1, −1) . Relative to these principal axes, the quadratic 2 2 2 2 form reduces to 7y1 + 3y2 . 15. A has eigenvalue λ = 2 of multiplicity two with corresponding linearly independent eigenvectors v1 = (1, 0, −1) and v2 = (0, 1, 1). 
Using the Gram-Schmidt procedure, an orthogonal basis in this eigenspace is {u1 , u2 } where 1 1 u1 = (1, 0, −1), u2 = (0, 1, 1) + (1, 0, −1) = (1, 2, 1). 2 2 1 1 An orthonormal basis for the eigenspace is √ (1, 0, −1), √ (1, 2, 1) . The remaining eigenvalue of A is 2 6 λ = −1, with eigenvector v3 = (−1, 1, −1). Consequently, a set of principal axes for the given quadratic 1 1 1 form is √ (1, 0, −1), √ (1, 2, 1), √ (−1, 1, −1) . Relative to these principal axes, the quadratic form 2 6 3 2 2 2 reduces to 2y1 + 2y2 − y3 . 16. A has eigenvalue λ = 0 of multiplicity two with corresponding linearly independent eigenvectors v1 = (0, 1, 0, −1) and v2 = (1, 0, −1, 0). Notice that these are orthogonal, hence we do not need to apply the Gram-Schmidt procedure. The remaining eigenvalues of A are λ = 4 and λ = 8, with corresponding eigenvectors v3 = (−1, 1, −1, 1) and v4 = (1, 1, 1, 1), respectively. Consequently, a set of principal axes for the given quadratic form is 1 1 1 1 √ (0, 1, 0, −1), √ (1, 0, −1, 0), (−1, 1, −1, 1), (1, 1, 1, 1) . 2 2 2 2 2 2 Relative to these principal axes, the quadratic form reduces to 4y3 + 8y4 . 17. A = ab bc where a, b, c ∈ R. a−λ c = 0 ⇐⇒ λ2 − (a + c)λ + (ac − b2 ) = 0 c b−λ (a + c) ± (a − c)2 + 4b2 ⇐⇒ λ = . Now A has repeated eigenvalues 2 a0 ⇐⇒ (a − c)2 + 4b2 = 0 ⇐⇒ a = c and b = 0 (since a, b, c ∈ R) ⇐⇒ A = 0a ⇐⇒ A = aI2 ⇐⇒ A is a scalar matrix. Consider det(A − λI ) = 0 ⇐⇒ 433 18.(a) A is a real n × n symmetric matrix =⇒ A possesses a complete set of eigenvectors =⇒ A is similar to a diagonal matrix =⇒ there exists an invertible matrix S such that S −1 AS = diag(λ, λ, λ, . . . , λ) where λ occurs n times. But diag(λ, λ, λ, . . . , λ) = λIn , so S −1 AS = λIn =⇒ AS = S (λIn ) =⇒ AS = λS =⇒ A = λSS −1 =⇒ A = λIn =⇒ A is a scalar matrix (b) Theorem: Let A be a nondefective n × n matrix. If λ is an eigenvalue of multiplicity n, then A is a scalar matrix. 
Proof: The proof is the same as that in part (a) since A has a complete set of eigenvectors. 19. Since real eigenvectors of A that correspond to distinct eigenvalues are orthogonal, it must be the case that if y = (y1 , y2 ) corresponds to λ2 where Ay = λ2 y, then x, y = 0 =⇒ (1, 2), (y1 , y2 ) = 0 =⇒ y1 + 2y2 = 0 =⇒ y1 = −2y2 =⇒ y = (−2y2 , y2 ) = y2 (−2, 1). Consequently, (−2, 1) is an eigenvector corresponding to λ2 . 20. (a) Let A be a real symmetric 2 × 2 matrix with two distinct eigenvalues, λ1 and λ2 , where v1 = (a, b) is an eigenvector corresponding to λ1 . Since real eigenvectors of A that correspond to distinct eigenvalues are orthogonal, it follows that if v2 = (c, d) corresponds to λ2 where av2 = λ2 v2 , then v1 , v2 = 0 =⇒ (a, b), (c, d) = 0, that is, ac + bd = 0. By inspection, we see that v2 = (−b, a). An orthonormal set of eigenvectors for A is 1 1 (a, b), √ (−b, a) . 2 2 + b2 +b a 1 1 √ a −√ b 2+ 2 a2 + b2 , then S T AS = diag(λ , λ ). Thus, if S = a 1 b 1 2 1 √ √ b a a2 + b2 a2 + b2 √ a2 (b) S T AS = diag(λ1 , λ2 ) =⇒ AS = S diag(λ1 , λ2 ), since S T = S −1 . Thus, 1 a −b λ1 0 ab A = S diag(λ1 , λ2 )S T = 2 a 0 λ2 −b a (a + b2 ) b 1 a −b λ1 a λ 1 b =2 a −λ2 b λ2 a (a + b2 ) b = (a2 1 + b2 ) λ1 a2 + λ2 b2 ab(λ1 − λ2 ) ab(λ1 − λ2 ) λ1 b2 + λ2 a2 . 21. A is a real symmetric 3 × 3 matrix with eigenvalues λ1 and λ2 of multiplicity two. (a) Let v1 = (1, −1, 1) be an eigenvector of A that corresponds to λ1 . Since real eigenvectors of A that correspond to distinct eigenvalues are orthogonal, it must be the case that if v = (a, b, c) corresponds to λ2 where Av = λ2 v, then v1 , v = 0 =⇒ (1, −1, 1), (a, b, c) = 0 =⇒ a − b + c = 0 =⇒ v = r(1, 1, 0)+ s(−1, 0, 1) where r and s are free variables. Consequently, v2 = (1, 1, 0) and v3 = (−1, 0, 1) are linearly independent eigenvectors corresponding to λ2 . Thus, {(1, 1, 0), (−1, 0, 1)} is a basis for E2 . 
v2 and v3 are not orthogonal since v2 , v3 = −1 = 0, so we will apply the Gram-Schmidt procedure to v2 and v3 . Let u2 = v2 = (1, 1, 0) and 1 11 v3 , u2 u2 = (−1, 0, 1) + (1, 1, 0) = − , , 1 . u3 = v3 − ||u2 ||2 2 22 Now, w1 = v1 = ||v1 || 1 1 1 √ , −√ , √ 3 3 3 is a unit eigenvector corresponding to λ1 , and 434 1 u3 1 2 1 1 √ , √ , 0 , w3 = are orthonormal eigenvectors corresponding = −√ , √ , √ ||u3 || 2 2 6 6 6 to λ2 . 1 1 1 √ √ −√ 3 2 6 1 1 1 −√ √ √ and S T AS = diag(λ1 , λ2 , λ2 ). Consequently, S = 3 2 6 1 2 √ √ 0 3 6 w2 = u2 = ||u2 || (b) Since S is an orthogonal matrix, S T AS = diag(λ1 , λ2 , λ2 ) =⇒ AS = S diag(λ1 , λ2 , λ3 ) T =⇒ A = S diag(λ1 , λ2 , λ3 )S =⇒ 1 1 1 1 1 1 √ √ √ −√ −√ √ 3 2 6 3 3 3 0 1 1 λ1 0 1 1 1 −√ √ √ √ √ 0 0 λ2 0 A= 3 2 6 2 2 0 0 λ2 1 1 2 1 2 √ √ √ √ −√ 0 3 6 6 6 6 λ1 λ1 λ1 1 1 1 √ √ √ √ −√ −√ 3 3 3 3 2 6 λ1 + 2λ2 −λ1 + λ2 λ1 − λ 2 1 1 1 1 λ λ √ √ √2 √2 0 = −λ1 + λ2 λ1 + 2λ2 −λ1 + λ2 . = −√ 3 3 2 6 2 2 λ1 − λ2 −λ1 + λ2 λ1 + 2λ2 1 2 λ2 λ 2 2λ 2 √ √ 0 √ √ −√ 3 6 6 6 6 T 22. (a) Let v1 , v2 ∈ Cn and recall that v1 v2 = [ v1 , v2 ]. T TT T T T T [ Av1 , v2 ] = (Av1 ) v2 = (v1 A )v2 = v1 (−A)v2 = −v1 (A)v2 = −v1 (Av2 ) = −v1 Av2 = [− v1 , Av2 ]. Thus, Av1 , v2 = − v1 , Av2 . (22.1) (b) Let v1 be an eigenvector corresponding to the eigenvalue λ1 , so Av1 = λ1 v1 . (22.2) Taking the inner product of (22.2) with v1 yields Av1 , v1 = λ1 v1 , v1 , that is Av1 , v1 = λ1 ||v1 ||2 . (22.3) Taking the complex conjugate of (22.3) gives Av1 , v1 = λ1 ||v1 ||2 , that is v1 , Av1 = λ1 ||v1 ||2 . (22.4) Adding (22.3) and (22.4), and using (22.1) with v2 = v1 yields (λ1 + λ1 )||v1 ||2 = 0. But ||v1 ||2 = 0, so λ1 + λ1 = 0, or equivalently, λ1 = −λ1 , which means that all nonzero eigenvalues of A are pure imaginary. 23. Let A be an n × n real skew-symmetric matrix where n is odd. Since A is real, the characteristic equation, det(A − λI ) = 0, has real coeﬃcients, so its roots come in conjugate pairs. 
By problem 20, all nonzero solutions of det(A − λI ) = 0 are pure imaginary, hence when n is odd, zero will be one of the eigenvalues of A. 24. det(A − λI ) = 0 ⇐⇒ −λ 4 −4 −4 −λ −2 4 2 −λ = 0 ⇐⇒ λ3 + 36λ = 0 ⇐⇒ λ = 0 or λ = ±6i. 435 0 4 −4 v1 0 If λ = 0 then (A − λI )v = 0 assumes the form −4 0 −2 v2 = 0 =⇒ 2v1 + v3 = 0 and 42 0 v3 0 v2 − v3 = 0. If we let v3 = 2r ∈ C, then the solution set of this system is {(−r, 2r, 2r) : r ∈ C} so the eigenvectors corresponding to λ = 0 are v1 = r( 1, 2, 2) where r ∈ C. − 6i 4 −4 v1 0 If λ = −6i then (A−λI )v = 0 assumes the form −4 6i −2 v2 = 0 =⇒ 5v1 +(−2+6i)v3 = 0 42 6i v3 0 and 5v2 + (4 + 3i)v3 = 0. If we let v3 = 5s ∈ C, then the solution set of this system is {(2 − 6i)s, (−4 − 3i)s, 5s : s ∈ C}, so the eigenvectors corresponding to λ = −6i are v2 = s(2 − 6i, −4 − 3i, 5) where s ∈ C. By Theorem 5.6.8, since A is a matrix with real entries, the eigenvectors corresponding to λ = 6i are of the form v3 = t(2 + 6i, −4 + 3i, 5) where t ∈ C. 25. det(A − λI ) = 0 ⇐⇒ −λ −1 −6 1 −λ 5 6 −5 −λ √ = 0 ⇐⇒ −λ3 − 62λ = 0 ⇐⇒ λ = 0 or λ = ± 62i. v1 0 0 −1 −6 0 5 v2 = 0 =⇒ v3 = t, v2 = −6t, v1 = If λ = 0 then (A − λI )v = 0 assumes the form 1 v3 0 6 −5 0 −5t, where t ∈ C. Thus, the solution set of the system is {(−5t, −6t, t) : t ∈ C}, so the eigenvectors corresponding to λ = 0 are v = t(−5, −6, 1), where t ∈ C. For the other eigenvalues, it is best to use technology to generate the corresponding eigenvectors. 26. A is a real n×n orthogonal matrix ⇐⇒ A−1 = AT ⇐⇒ AT A = In . Suppose that A = [v1 , v2 , v3 , . . . , vn ]. T Since the ith row of AT is equal to vi , the matrix multiplication assures us that the ij th entry of AT A is T equal to vi , vi . Thus from the equality AT A = In = [δij ], an n × n matrix A = [v1 , v2 , v3 , . . . , vn ] is T orthogonal if an only if vi , vi = δij , that is, if and only if the columns (rows) of A, {v1 , v2 , v3 , . . . , vn }, form an orthonormal set of vectors. 
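The skew-symmetric results of Problems 22–24 — nonzero eigenvalues are purely imaginary, and for odd n zero must be an eigenvalue — can be illustrated with Problem 24's matrix, whose eigenvalues are 0 and ±6i:

```python
import numpy as np

A = np.array([[0.0, 4.0, -4.0],
              [-4.0, 0.0, -2.0],
              [4.0, 2.0, 0.0]])
assert np.allclose(A.T, -A)                  # A is skew-symmetric

w = np.linalg.eigvals(A)
# Real parts all vanish; imaginary parts are -6, 0, 6.
assert np.allclose(np.sort(w.real), [0.0, 0.0, 0.0], atol=1e-9)
assert np.allclose(np.sort(w.imag), [-6.0, 0.0, 6.0], atol=1e-9)
```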
Solutions to Section 5.11

True-False Review:

1. TRUE. See Remark 1 following Definition 5.11.1.

2. TRUE. Each Jordan block corresponds to a cycle of generalized eigenvectors, and each such cycle contains exactly one eigenvector. By construction, the eigenvectors (and generalized eigenvectors) are chosen to form a linearly independent set of vectors.

3. FALSE. For example, in a diagonalizable n × n matrix, the n linearly independent eigenvectors can be arbitrarily placed in the columns of the matrix S. Thus, an ample supply of invertible matrices S can be constructed.

4. FALSE. For instance, if J1 = J2 = [0 1; 0 0], then J1 and J2 are in Jordan canonical form, but J1 + J2 = [0 2; 0 0] is not in Jordan canonical form.

5. TRUE. This is simply a restatement of the definition of a generalized eigenvector.

6. FALSE. The number of Jordan blocks corresponding to λ in the Jordan canonical form of A is the number of linearly independent eigenvectors of A corresponding to λ, which is dim[Eλ], the dimension of the eigenspace corresponding to λ, not dim[Kλ].

7. TRUE. This is the content of Theorem 5.11.8.

8. FALSE. For instance, if J1 = J2 = [1 1; 0 1], then J1 and J2 are in Jordan canonical form, but J1 J2 = [1 2; 0 1] is not in Jordan canonical form.

9. TRUE. If we place the vectors in a cycle of generalized eigenvectors of A (see Equation (5.11.3)) in the columns of the matrix S formulated in this section, in the order they appear in the cycle, then the corresponding columns of the matrix S⁻¹AS will form a Jordan block.

10. TRUE. The assumption here is that all Jordan blocks have size 1 × 1, which precisely says that the Jordan canonical form of A is a diagonal matrix. This means that A is diagonalizable.

11. TRUE. Suppose that S⁻¹AS = B and that J is a Jordan canonical form of A. So there exists an invertible matrix T such that T⁻¹AT = J.
Then B = S⁻¹AS = S⁻¹(TJT⁻¹)S = (T⁻¹S)⁻¹ J (T⁻¹S), and hence (T⁻¹S) B (T⁻¹S)⁻¹ = J, which shows that J is also a Jordan canonical form of B.

12. FALSE. For instance, if J = [0 1; 0 0] and r = 2, then J is in Jordan canonical form, but rJ = [0 2; 0 0] is not in Jordan canonical form.

Problems:

1. There are 3 possible Jordan canonical forms:

[1 0 0; 0 1 0; 0 0 1], [1 1 0; 0 1 0; 0 0 1], [1 1 0; 0 1 1; 0 0 1].

2. There are 4 possible Jordan canonical forms:

[1 0 0 0; 0 1 0 0; 0 0 3 0; 0 0 0 3], [1 1 0 0; 0 1 0 0; 0 0 3 0; 0 0 0 3], [1 0 0 0; 0 1 0 0; 0 0 3 1; 0 0 0 3], [1 1 0 0; 0 1 0 0; 0 0 3 1; 0 0 0 3].

3. There are 7 possible Jordan canonical forms, one for each list of Jordan block sizes for the eigenvalue 2 (of multiplicity 5): 5; 4,1; 3,2; 3,1,1; 2,2,1; 2,1,1,1; 1,1,1,1,1. Each list gives the 5 × 5 block-diagonal matrix whose blocks have 2 on the main diagonal and 1s on the superdiagonal.

4. There are 10 possible Jordan canonical forms: the block sizes for the eigenvalue 3 (of multiplicity 4) can be 4; 3,1; 2,2; 2,1,1; or 1,1,1,1, and the block sizes for the eigenvalue 9 (of multiplicity 2) can be 2 or 1,1. These choices are independent, giving 5 · 2 = 10 block-diagonal matrices.

5.
Since λ = 2 occurs with multiplicity 4, it can give rise to the following possible Jordan block sizes: (a) 4; (b) 3,1; (c) 2,2; (d) 2,1,1; (e) 1,1,1,1. Likewise, λ = 6 occurs with multiplicity 4, so it can give rise to the same five possible Jordan block sizes. Finally, λ = 8 occurs with multiplicity 3, so it can give rise to three possible Jordan block sizes: (a) 3; (b) 2,1; (c) 1,1,1. Since the block sizes for each eigenvalue can be independently determined, we have 5 · 5 · 3 = 75 possible Jordan canonical forms.

6. Since λ = 2 occurs with multiplicity 4, it can give rise to the following possible Jordan block sizes: (a) 4; (b) 3,1; (c) 2,2; (d) 2,1,1; (e) 1,1,1,1. Next, λ = 5 occurs with multiplicity 6, so it can give rise to the following possible Jordan block sizes: (a) 6; (b) 5,1; (c) 4,2; (d) 4,1,1; (e) 3,3; (f) 3,2,1; (g) 3,1,1,1; (h) 2,2,2; (i) 2,2,1,1; (j) 2,1,1,1,1; (k) 1,1,1,1,1,1. There are 5 possible block-size lists corresponding to λ = 2 and 11 corresponding to λ = 5. Multiplying these results, we have 5 · 11 = 55 possible Jordan canonical forms.

7. Since (A − 5I)² = 0, no cycle of generalized eigenvectors corresponding to λ = 5 can have length greater than 2, and hence only Jordan blocks of size 2 or less are possible. Thus, the possible block-size lists under this restriction (corresponding to λ = 5) are 2,2,2; 2,2,1,1; 2,1,1,1,1; and 1,1,1,1,1,1. There are four such. There are still five possible block-size lists corresponding to λ = 2. Multiplying these results, we have 5 · 4 = 20 possible Jordan canonical forms under this restriction.

8.
(a): With λ1 of multiplicity 3 and λ2 of multiplicity 2, the possible Jordan canonical forms are the six 5 × 5 matrices

[λ1 0 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 0; 0 0 0 0 λ2], [λ1 0 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 1; 0 0 0 0 λ2],
[λ1 1 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 0; 0 0 0 0 λ2], [λ1 1 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 1; 0 0 0 0 λ2],
[λ1 1 0 0 0; 0 λ1 1 0 0; 0 0 λ1 0 0; 0 0 0 λ2 0; 0 0 0 0 λ2], [λ1 1 0 0 0; 0 λ1 1 0 0; 0 0 λ1 0 0; 0 0 0 λ2 1; 0 0 0 0 λ2].

(b): The assumption that (A − λ1I)² = 0 implies that there can be no Jordan blocks corresponding to λ1 of size 3 × 3 (or greater). Thus, the only possible Jordan canonical forms for this matrix now are the first four matrices listed in part (a), those whose λ1-blocks have size at most 2 × 2.

9. The assumption that (A − λI)³ = 0 implies no Jordan blocks of size greater than 3 × 3 are possible. The fact that (A − λI)² ≠ 0 implies that there is at least one Jordan block of size 3 × 3. Thus, the possible block-size combinations for a 6 × 6 matrix with eigenvalue λ of multiplicity 6, no blocks of size greater than 3 × 3, and at least one 3 × 3 block are: 3,3; 3,2,1; 3,1,1,1. Thus, there are 3 possible Jordan canonical forms. (We omit the list itself; it can be produced simply from the list of block sizes above.)

10. The eigenvalues of the matrix with this characteristic polynomial are λ = 4, 4, −6. The possible Jordan canonical forms in this case are therefore:

[4 0 0; 0 4 0; 0 0 −6], [4 1 0; 0 4 0; 0 0 −6].

11. The eigenvalues of the matrix with this characteristic polynomial are λ = 4, 4, 4, −1, −1. The possible Jordan canonical forms in this case are therefore the six 5 × 5 matrices with λ = 4 block sizes 1,1,1 or 2,1 or 3 and λ = −1 block sizes 1,1 or 2:

[4 0 0 0 0; 0 4 0 0 0; 0 0 4 0 0; 0 0 0 −1 0; 0 0 0 0 −1], [4 0 0 0 0; 0 4 0 0 0; 0 0 4 0 0; 0 0 0 −1 1; 0 0 0 0 −1],
[4 1 0 0 0; 0 4 0 0 0; 0 0 4 0 0; 0 0 0 −1 0; 0 0 0 0 −1], [4 1 0 0 0; 0 4 0 0 0; 0 0 4 0 0; 0 0 0 −1 1; 0 0 0 0 −1],
[4 1 0 0 0; 0 4 1 0 0; 0 0 4 0 0; 0 0 0 −1 0; 0 0 0 0 −1], [4 1 0 0 0; 0 4 1 0 0; 0 0 4 0 0; 0 0 0 −1 1; 0 0 0 0 −1].

12.
The eigenvalues of the matrix with this characteristic polynomial are λ = −2, −2, −2, 0, 0, 3, 3. The λ = −2 blocks can have sizes 3; 2,1; or 1,1,1; the λ = 0 blocks can have sizes 2 or 1,1; and the λ = 3 blocks can have sizes 2 or 1,1. The possible Jordan canonical forms are therefore the 3 · 2 · 2 = 12 block-diagonal 7 × 7 matrices obtained from these independent choices, each block carrying its eigenvalue on the main diagonal and 1s on the superdiagonal.

13. The eigenvalues of the matrix with this characteristic polynomial are λ = −2, −2, 6, 6, 6, 6, 6.
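Counting arguments like those in Problems 5-7 and 12 reduce to multiplying numbers of integer partitions — each eigenvalue contributes one partition of its multiplicity as a choice of Jordan block sizes. A small plain-Python sketch of that bookkeeping:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, largest=None):
    """Number of partitions of n into parts of size <= largest
    (each partition is one admissible list of Jordan block sizes)."""
    if largest is None:
        largest = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, largest) + 1))

assert partitions(3) == 3 and partitions(4) == 5 and partitions(6) == 11
assert partitions(4) * partitions(4) * partitions(3) == 75   # Problem 5
assert partitions(4) * partitions(6) == 55                   # Problem 6
assert partitions(3) * partitions(2) * partitions(2) == 12   # Problem 12
assert partitions(2) * partitions(5) == 14                   # Problem 13
```

The second argument also handles restricted counts such as Problem 7's, where blocks larger than 2 are forbidden for one eigenvalue.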
The possible Jordan canonical forms in this case are therefore the 2 · 7 = 14 block-diagonal 7 × 7 matrices obtained by choosing block sizes 2 or 1,1 for λ = −2 together with one of the seven block-size lists 5; 4,1; 3,2; 3,1,1; 2,2,1; 2,1,1,1; 1,1,1,1,1 for λ = 6.

14. Of the Jordan canonical forms in Problem 13, we are asked here to find the ones that contain exactly five Jordan blocks, since there is a correspondence between the Jordan blocks and the linearly independent eigenvectors. There are three:

[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 1 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 0 0 0; 0 0 0 0 6 1 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 0 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6].

15. Many examples are possible here. Let A = [0 1; 0 0]. The only eigenvalue of A is 0. The vector v = (0, 1) is not an eigenvector, since Av = (1, 0) ≠ λv. However, A² = 0₂, so every vector is a generalized eigenvector of A corresponding to λ = 0.

16. Many examples are possible here. Let A = [0 1 0; 0 0 0; 0 0 0]. The only eigenvalue of A is 0. The vector v = (0, 1, 0) is not an eigenvector, since Av = (1, 0, 0) ≠ λv. However, A² = 0₃, so every vector is a generalized eigenvector of A corresponding to λ = 0.

17. The characteristic polynomial is

det(A − λI) = det[1 − λ, 1; −1, 3 − λ] = (1 − λ)(3 − λ) + 1 = λ² − 4λ + 4 = (λ − 2)²,

with roots λ = 2, 2. We have A − 2I = [−1 1; −1 1] ~ [1 −1; 0 0]. Because there is only one unpivoted column in this latter matrix, we only have one eigenvector for A. Hence, A is not diagonalizable, and therefore JCF(A) = [2 1; 0 2]. To determine the matrix S, we must find a cycle of generalized eigenvectors of length 2. Therefore, it suffices to find a vector v in R² such that (A − 2I)v ≠ 0. Many choices are possible here. We take v = (0, 1). Then (A − 2I)v = (1, 1). Thus, we have S = [1 0; 1 1].

18. The characteristic polynomial is

det(A − λI) = det[1 − λ, 1, 1; 0, 1 − λ, 1; 0, 1, 1 − λ] = (1 − λ)[(1 − λ)² − 1] = (1 − λ)(λ² − 2λ) = λ(1 − λ)(λ − 2),

with roots λ = 0, 1, 2. Since A is a 3 × 3 matrix with three distinct eigenvalues, it is diagonalizable. Therefore, its Jordan canonical form is simply a diagonal matrix with the eigenvalues as its diagonal entries: JCF(A) = [0 0 0; 0 1 0; 0 0 2]. To determine the invertible matrix S, we must find eigenvectors associated with each eigenvalue.

Eigenvalue λ = 0: Consider nullspace(A) = nullspace[1 1 1; 0 1 1; 0 1 1], and this latter matrix can be row-reduced to [1 1 1; 0 1 1; 0 0 0]. The equations corresponding to the rows of this matrix are x + y + z = 0 and y + z = 0. Setting z = t, then y = −t and x = 0. With t = 1 this gives us the eigenvector (0, −1, 1).

Eigenvalue λ = 1: Consider nullspace(A − I) = nullspace[0 1 1; 0 0 1; 0 1 0]. By inspection, we see that z = 0 and y = 0 are required, but x = t is free. Thus, an eigenvector associated with λ = 1 may be chosen as (1, 0, 0).
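The cycle construction of Problem 17 can be verified mechanically: with S assembled from {(A − 2I)v, v}, the defining identity S⁻¹AS = J is equivalent to AS = SJ, which avoids computing an inverse. A minimal plain-Python check (the `matmul` helper is ad hoc, not from the text):

```python
# Sanity check for Problem 17: A S = S J with S built from the cycle.
A = [[1, 1],
     [-1, 3]]
S = [[1, 0],
     [1, 1]]   # columns: (A - 2I)v = (1,1) and v = (0,1)
J = [[2, 1],
     [0, 2]]

def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

assert matmul(A, S) == matmul(S, J)
```

The same AS = SJ test works for every S and JCF pair constructed in the remaining problems.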
Eigenvalue λ = 2: Consider nullspace(A − 2I) = nullspace[−1 1 1; 0 −1 1; 0 1 −1], which can be row-reduced to [1 −1 −1; 0 1 −1; 0 0 0]. Setting z = t, we have y = t and x = 2t. Thus, with t = 1 we obtain the eigenvector (2, 1, 1) associated with λ = 2. Placing the eigenvectors obtained as the columns of S (with columns corresponding to the eigenvalues of JCF(A) above), we have S = [0 1 2; −1 0 1; 1 0 1].

19. We can get the characteristic polynomial by using cofactor expansion along the second column as follows:

det(A − λI) = det[5 − λ, 0, −1; 1, 4 − λ, −1; 1, 0, 3 − λ] = (4 − λ)[(5 − λ)(3 − λ) + 1] = (4 − λ)(λ² − 8λ + 16) = (4 − λ)(λ − 4)²,

with roots λ = 4, 4, 4. We have A − 4I = [1 0 −1; 1 0 −1; 1 0 −1], and so vectors (x, y, z) in the nullspace of this matrix must satisfy x − z = 0. Setting z = t and y = s, we have x = t. Hence, we obtain two linearly independent eigenvectors of A corresponding to λ = 4: (1, 0, 1) and (0, 1, 0). Therefore, JCF(A) contains exactly two Jordan blocks. This uniquely determines JCF(A), up to a rearrangement of the Jordan blocks: JCF(A) = [4 1 0; 0 4 0; 0 0 4].

To determine the matrix S, we must seek a generalized eigenvector. It is easy to verify that (A − 4I)² = 0₃, so every nonzero vector v is a generalized eigenvector. We must choose one such that (A − 4I)v ≠ 0 in order to form a cycle of length 2. There are many choices here, but let us choose v = (1, 0, 0). Then (A − 4I)v = (1, 1, 1). Notice that this is an eigenvector of A corresponding to λ = 4. To complete the matrix S, we will need a second linearly independent eigenvector. Again, there are a multitude of choices. Let us choose the eigenvector (0, 1, 0) found above. Thus, S = [1 1 0; 1 0 1; 1 0 0].

20.
We will do cofactor expansion along the first column of the matrix to obtain the characteristic polynomial:

det(A − λI) = det[4 − λ, −4, 5; −1, 4 − λ, 2; −1, 2, 4 − λ]
= (4 − λ)(λ² − 8λ + 12) + 4(λ − 2) + 5(2 − λ)
= (4 − λ)(λ² − 8λ + 12) + (2 − λ)
= (4 − λ)(λ − 2)(λ − 6) + (2 − λ)
= (λ − 2)[(4 − λ)(λ − 6) − 1]
= (λ − 2)(−λ² + 10λ − 25)
= −(λ − 2)(λ − 5)²,

with eigenvalues λ = 2, 5, 5.

Since λ = 5 is a repeated eigenvalue, we consider this eigenvalue first. We must consider the nullspace of the matrix A − 5I = [−1 −4 5; −1 −1 2; −1 2 −1], which can be row-reduced to [1 1 −2; 0 3 −3; 0 0 0], a matrix that has only one unpivoted column, and hence λ = 5 only yields one linearly independent eigenvector. Thus, JCF(A) = [2 0 0; 0 5 1; 0 0 5].

To determine an invertible matrix S, we first proceed to find a cycle of generalized eigenvectors of length 2 corresponding to λ = 5. Therefore, we must find a vector v in R³ such that (A − 5I)v ≠ 0 but (A − 5I)²v = 0 (in order that v is a generalized eigenvector). Note that (A − 5I)² = [0 18 −18; 0 9 −9; 0 0 0], so if we set v = (1, 0, 0), then (A − 5I)v = (−1, −1, −1) and (A − 5I)²v = 0. Thus, we obtain the cycle {(−1, −1, −1), (1, 0, 0)}.

Next, corresponding to λ = 2, we must find an eigenvector. We need to find a nonzero vector (x, y, z) in nullspace(A − 2I) = nullspace[2 −4 5; −1 2 2; −1 2 2], and this latter matrix can be row-reduced to [1 −2 −2; 0 0 1; 0 0 0]. The middle row requires that z = 0, and if we set y = t, then x = 2t. Thus, by using t = 1, we obtain the eigenvector (2, 1, 0). Thus, we can form the matrix S = [2 −1 1; 1 −1 0; 0 −1 0].

21. We are given that λ = −5 occurs with multiplicity 2 as a root of the characteristic polynomial of A. To search for corresponding eigenvectors, we consider nullspace(A + 5I); the matrix A + 5I row-reduces to [1 −1 0; 0 0 1; 0 0 0]. Since there is only one unpivoted column in this row-echelon form, the eigenspace corresponding to λ = −5 is only one-dimensional.
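Problem 20's matrices can be checked the same way: with the λ = 2 eigenvector and the λ = 5 cycle in the columns of S, the identity AS = SJ must hold. A minimal plain-Python sketch (the `matmul` helper is ad hoc, not from the text):

```python
# Sanity check for Problem 20: columns of S are the lambda = 2 eigenvector
# (2,1,0) followed by the length-2 cycle {(-1,-1,-1), (1,0,0)} for lambda = 5.
A = [[4, -4, 5],
     [-1, 4, 2],
     [-1, 2, 4]]
S = [[2, -1, 1],
     [1, -1, 0],
     [0, -1, 0]]
J = [[2, 0, 0],
     [0, 5, 1],
     [0, 0, 5]]

def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

assert matmul(A, S) == matmul(S, J)

# The chosen generalized eigenvector v = (1,0,0) is killed by (A - 5I)^2:
A5 = [[A[i][j] - (5 if i == j else 0) for j in range(3)] for i in range(3)]
sq = matmul(A5, A5)
assert [row[0] for row in sq] == [0, 0, 0]
```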
Thus, based on the eigenvalues λ = −5, −5, −6, we already know that JCF(A) = [−5 1 0; 0 −5 0; 0 0 −6].

Next, we seek a cycle of generalized eigenvectors of length 2 corresponding to λ = −5. The cycle takes the form {(A + 5I)v, v}, where v is a vector such that (A + 5I)²v = 0 but (A + 5I)v ≠ 0. An obvious vector that is killed by (A + 5I)² (although other choices are also possible) is v = (0, 1, 1). Then (A + 5I)v = (1, 1, 0). Hence, we have a cycle of generalized eigenvectors corresponding to λ = −5: {(1, 1, 0), (0, 1, 1)}.

Now consider the eigenspace corresponding to λ = −6. We need only find one eigenvector (x, y, z) in this eigenspace. To do so, we compute nullspace(A + 6I); the matrix A + 6I row-reduces to [1 −3 −1; 0 1 0; 0 0 0]. We see that y = 0 and x − 3y − z = 0, which is equivalent to x − z = 0. Setting z = t, we have x = t. With t = 1, we obtain the eigenvector (1, 0, 1). Hence, we can form the matrix S = [1 0 1; 1 1 0; 0 1 1].

22. Because the matrix is upper triangular, the eigenvalues of A appear along the main diagonal: λ = 2, 2, 3. Let us first consider the eigenspace corresponding to λ = 2: we consider nullspace(A − 2I) = nullspace[0 −2 14; 0 1 −7; 0 0 0],
Vectors (x, y, z ) in the nullspace of this matrix must satisfy z = 0 which row reduces to 0 0 00 0 and x + 2y − 14z = 0 or x + 2y = 0. Setting y = t and x = −2t. Hence, setting t = 1 gives the eigenvector (−2, 1, 0). Thus, using the eigenvectors obtained above, we obtain the matrix 1 0 −2 1 . S= 0 7 01 0 23. We use the characteristic polynomial to determine the eigenvalues of A: 7−λ −2 2 4−λ −1 det(A − λI ) = det 0 −1 1 4−λ = (4 − λ) [(7 − λ)(4 − λ) + 2] + (7 − λ − 2) = (4 − λ)(λ2 − 11λ + 30) + (5 − λ) = (4 − λ)(λ − 5)(λ − 6) + (5 − λ) = (5 − λ) [1 − (4 − λ)(6 − λ)] = (5 − λ)(λ − 5)2 = −(λ − 5)3 . Hence, the eigenvalues are λ = 5, 5, 5. Let us consider the eigenspace corresponding to λ = 5. We consider 2 −2 2 nullspace(A − 5I ) = nullspace 0 −1 −1 , −1 1 −1 1 −1 1 1 1 , which contains one unpivoted column. Therefore, the and this latter matrix row-reduces to 0 0 00 eigenspace corresponding to λ = 5 is one-dimensional. Therefore, the Jordan canonical form of A consists 447 of one Jordan block: 5 JCF(A) = 0 0 1 5 0 0 1 . 5 A corresponding invertible matrix S in this case must have columns that consist of one cycle of generalized eigenvectors, which will take the form {(A − 5I )2 v, (A − 5I )v, v}, where v is a generalized eigenvector. Now, we can verify quickly that 2 −2 2 20 4 2 , (A − 5I )3 = 03 . A − 5I = 0 −1 −1 , (A − 5I )2 = 1 0 −1 1 −1 −1 0 −2 The fact that (A − 5I )3 = 03 means that every nonzero vector v is a generalized eigenvector. Hence, we 1 simply choose v such that (A − 5I )2 v = 0. There are many choices. Let us take v = 0 . Then 0 2 2 (A − 5I )v = 0 and (A − 5I )2 v = 1 . Thus, we have the cycle of generalized eigenvectors −1 −1 2 2 1 1 , 0 , 0 . −1 −1 0 Hence, we have 2 21 0 0 . S= 1 −1 −1 0 24. Because the matrix is upper triangular, the eigenvalues of A appear along the main diagonal: λ = −1, −1, −1. Let us consider the eigenspace corresponding to λ = −1. 
We consider 0 −1 0 0 −2 , nullspace(A + I ) = nullspace 0 0 0 0 and it is straightforward to see that the nullspace here consists precisely of vectors that are multiples of (1, 0, 0). Because only one linearly independent eigenvector was obtained, the Jordan canonical form of this matrix consists of only one Jordan block: −1 1 0 1 . JCF(A) = 0 −1 0 0 −1 A corresponding invertible matrix S in this case must have columns that consist of one cycle of generalized eigenvectors, which will take the form {(A + I )2 v, (A + I )v, v}. Here, we have 0 −1 0 002 0 −2 and (A + I )2 = 0 0 0 and (A + I )3 = 03 . A+I = 0 0 0 0 000 448 Therefore, every nonzero vector is a generalized eigenvector. We wish to choose a vector such that v 0 0 (A + I )2 v = 0. There are many choices, but we will choose v = 0 . Then (A + I )v = −2 and 1 0 2 (A + I )2 v = 0 . Hence, we form the matrix S as follows: 0 2 00 S = 0 −2 0 . 0 01 25. We use the characteristic polynomial to determine the eigenvalues of A: 2−λ −1 0 1 0 3−λ −1 0 det(A − λI ) = det 0 1 1−λ 0 0 −1 0 3−λ = (2 − λ)(3 − λ) [(3 − λ)(1 − λ) + 1] = (2 − λ)(3 − λ)(λ2 − 4λ + 4) = (2 − λ)(3 − λ)(λ − 2)2 , and so the eigenvalues are λ = 2, 2, 2, 3. First, consider the eigenspace corresponding to λ = 2. We consider 0 −1 01 0 1 −1 0 , nullspace(A − 2I ) = nullspace 0 1 −1 0 0 −1 01 0 1 −1 0 0 0 1 −1 . There are two unpivoted columns, and and this latter matrix can be row-reduced to 0 0 0 0 00 0 0 therefore two linearly independent eigenvectors corresponding to λ = 2. Thus, we will obtain two Jordan blocks corresponding to λ = 2, and they necessarily will have size 2 × 2 and 1 × 1. Thus, we are already in a position to write down the Jordan canonical form of A: 2100 0 2 0 0 JCF(A) = 0 0 2 0 . 0003 We continue in order to obtain an invertible matrix S such that S −1 AS is in Jordan canonical form. To this end, we see a generalized eigenvector v such that (A − 2I )v = 0 and (A − 2I )2 v = 0. 
Note that 0 −1 01 0 −2 1 1 0 1 −1 0 0 0 0 and (A − 2I )2 = 0 . A − 2I = 0 0 1 −1 0 0 0 0 0 −1 01 0 −2 1 1 449 0 0 By inspection, we see that by taking v = −1 (there are many other valid choices, of course), then 1 1 1 (A − 2I )v = and (A − 2I )2 v = 0. We also need a second eigenvector corresponding to λ = 2 that is 1 1 linearly independent from (A − 2I )v just obtained. From the row-echelon form of A − 2I , see that all we 1 s 0 t eigenvectors corresponding to λ = 2 take the form , so for example, we can take . 0 t 0 t Next, we consider the eigenspace corresponding to λ = 3. We consider −1 −1 01 0 0 −1 0 . nullspace(A − 3I ) = nullspace 0 1 −2 0 0 −1 00 Now, if (x, y, z, w) is an eigenvector corresponding to λ = 3, the last three rows of the matrix imply that y = z = 0. Thus, the ﬁrst row becomes −x + w = 0. Setting w = t, then x = t, so we obtain eigenvectors in the form (t, 0, 0, t). Setting t = 1 gives the eigenvector (1, 0, 0, 1). Thus, we can now form the matrix S such that S −1 AS is the Jordan canonical form we obtained above: 1 011 1 0 0 0 S= 1 −1 0 0 . 1 101 26. From the characteristic polynomial, we have eigenvalues λ = 2, 2, 4, 4. Let us consider the associated eigenspaces. Corresponding to λ = 2, we seek eigenvectors (x, y, z, w) by computing 0 −4 2 2 −2 −2 1 3 nullspace(A − 2I ) = nullspace −2 −2 1 3 , −2 −6 3 5 2 2 −1 −3 0 2 −1 −1 . Setting w = 2t and z = 2s, we obtain y = s + t and this matrix can be row-reduced to 0 0 0 0 00 0 0 and x = 2t, so we obtain the eigenvectors (2, 1, 0, 2) and (0, 1, 2, 0) corresponding to λ = 2. Next, corresponding to λ = 4, we seek eigenvectors (x, y, z, w) by computing −2 −4 22 −2 −4 1 3 nullspace(A − 4I ) = nullspace −2 −2 −1 3 . −2 −6 33 450 1 2 −1 −1 0 2 −1 −1 . Since there is only one unpivoted column, this eigenspace This matrix can be reduced to 0 0 1 −1 00 0 0 is only one-dimensional, despite λ = 4 occurring with multiplicity 2 as a root of the characteristic equation. 
Therefore, we must seek a generalized eigenvector v such that (A − 4I )v is an eigenvector. This in turn requires that (A − 4I )2 v = 0. We ﬁnd that −2 −2 A − 4I = −2 −2 −4 22 −4 1 3 −2 −1 3 −6 33 4 8 −4 4 4 0 (A − 4I )2 = 4 0 4 4 8 −4 and 1 0 Note that the vector v = satisﬁes (A − 4I )2 v = 0 and (A − 4I )v = 0 1 of generalized eigenvectors corresponding to λ = 4 given by 1 0 10 , . 0 1 1 1 Hence, we can form the matrix 2 1 S= 0 2 0 1 2 0 0 1 1 1 1 0 0 1 2 0 JCF(A) = 0 0 and 27. Since A is upper triangular, the eigenvalues appear along at 0 0 nullspace(A − 2I ) = nullspace 0 0 0 0 2 0 0 −4 −4 . −4 −4 0 1 . Thus, we have the cycle 1 1 0 0 . 1 4 0 0 4 0 the main diagonal: λ = 2, 2, 2, 2, 2. Looking 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 1 1 1 0 , we see that the row-echelon form of this matrix will contain two pivots, and therefore, three unpivoted columns. That means that the eigenspace corresponding to λ = 2 is three-dimensional. Therefore, JCF(A) consists of three Jordan blocks. The only list of block sizes for a 5 × 5 matrix with three blocks are (a) 3,1,1 and (b) 2,2,1. In this case, note that (A − 2I ) = 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 1 1 1 1 = 05 , 451 so that it is possible to ﬁnd a vector v that generates a cycle of generalized eigenvectors of length 3: {(A − 2I )2 v, (A − 2I )v, v}. Thus, JCF(A) contains a Jordan block of size 3 × 3. We conclude that the correct list of block sizes for this matrix is 3,1,1: 21000 0 2 1 0 0 JCF(A) = 0 0 2 0 0 . 0 0 0 2 0 00002 28. Since A is upper triangular, the eigenvalues appear along the main diagonal: λ = 0, 0, 0, 0, 0. Looking at nullspace(A − 0I ) = nullspace(A), we see that eigenvectors (x, y, z, u, v ) corresponding to λ = 0 must satisfy z = u = v = 0 (since the third row gives 6u = 0, the ﬁrst row gives u + 4v = 0, and the second row gives z + u + v = 0). Thus, we have only two free variables, and thus JCF(A) will consist of two Jordan blocks. 
The only list of block sizes for a 5 × 5 matrix with two blocks are (a)4,1 and (b) 3,2. In this case, it is easy to verify that A3 = 0, so that the longest possible cycle of generalized eigenvectors {A2 v, Av, v} has length 3. Therefore, case (b) holds: JCF(A) consists of one Jordan block of size 3 × 3 and one Jordan block of size 2 × 2: 01000 0 0 1 0 0 JCF(A) = 0 0 0 0 0 . 0 0 0 0 1 00000 29. Since A is upper triangular, the eigenvalues appear Looking at 0 0 0 0 nullspace(A − I ) = nullspace 0 0 0 0 along the main diagonal: λ = 1, 1, 1, 1, 1, 1, 1, 1. 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 1 0 , we see that if (a, b, c, d, e, f, g, h) is an eigenvector of A, then b = d = f = h = 0, and a, c, e, and g are free variables. Thus, we have four linearly independent eigenvectors of A, and hence we expect four Jordan blocks. Now, an easy calculation shows that (A − I )2 = 0, and thus, no Jordan blocks of size greater than 2 × 2 are permissible. Thus, it must be the case that JCF(A) consists of four Jordan blocks, each of which is a 2 × 2 matrix: 11000000 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 0 0 0 0 JCF(A) = 0 0 0 0 1 1 0 0 . 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 00000001 452 30. NOT SIMILAR. We will compute JCF(A) and JCF(B ). If they are the same (up to a rearrangement of the Jordan blocks), then A and B are similar; otherwise they are not. Both matrices have eigenvalues λ = 6, 6, 6. Consider 1 10 nullspace(A − 6I ) = nullspace −1 −1 0 . 1 00 We see that eigenvectors (x, y, z ) corresponding to λ = 6 must satisfy x + y = 0 and x = 0. Therefore, x = y = 0, while z is a free variable. Since we obtain only one free variable, JCF(A) consists of just one Jordan block corresponding to λ = 6: 610 JCF(A) = 0 6 1 . 006 Next, consider 0 −1 1 0 0 . nullspace(B − 6I ) = nullspace 0 0 00 In this case, eigenvectors (x, y, z ) corresponding to λ = 6 must satisfy −y + z = 0. 
Therefore, both x and z are free variables, and hence, JCF(B ) consists of two Jordan blocks corresponding to λ = 6: 610 JCF(B ) = 0 6 0 . 006 Since JCF(A) = JCF(B ), we conclude that A and B are not similar. 31. SIMILAR. We will compute JCF(A) and JCF(B ). If they are the same (up to a rearrangement of the Jordan blocks), then A and B are similar; otherwise they are not. Both matrices have eigenvalues λ = 5, 5, 5. (For A, this is easiest to compute by expanding det(A − λI ) along the middle row, and for B , this is easiest to compute by expanding det(B − λI ) along the second column.) In Problem 23, we computed 510 JCF(A) = 0 5 1 . 005 Next, consider −2 −1 −2 1 1 , nullspace(B − 5I ) = nullspace 1 1 0 1 111 and this latter matrix can be row-reduced to 0 1 0 , which has only one unpivoted column. Thus, 000 the eigenspace of B corresponding to λ = 5 is only one-dimensional, and so 510 JCF(B ) = 0 5 1 . 005 Since A and B each had the same Jordan canonical form, they are similar matrices. 453 32. The eigenvalues of A are λ = −1, −1, and the eigenspace corresponding to λ = −1 is only onedimensional. Thus, we seek a generalized eigenvector v of A corresponding to λ = −1 such that {(A + I )v, v} −2 −2 is a cycle of generalized eigenvectors. Note that A + I = and (A + I )2 = 02 . Thus, every 2 2 1 nonzero vector v is a generalized eigenvector of A corresponding to λ = −1. Let us choose v = . Then 0 −2 (A + I )v = . Form the matrices 2 S= −2 2 1 0 and J= −1 1 0 −1 . Via the substitution x = S y, the system x = Ax is transformed into y = J y. The corresponding equations are y1 = −y1 + y2 and y2 = −y2 . The solution to the second equation is y2 (t) = c1 e−t . Substituting this solution into y1 = −y1 + y2 gives y1 + y1 = c1 e−t . This is a ﬁrst order linear equation with integrating factor I (t) = et . When we multiply the diﬀerential equation for y1 (t) by I (t), it becomes (y1 · et ) = c1 . Integrating both sides yields y1 · et = c1 t + c2 . 
Thus, y1(t) = c1 t e^(−t) + c2 e^(−t). Thus, we have

y(t) = [y1(t); y2(t)] = [c1 t e^(−t) + c2 e^(−t); c1 e^(−t)].

Finally, we solve for x(t):

x(t) = S y(t) = [−2 1; 2 0][c1 t e^(−t) + c2 e^(−t); c1 e^(−t)]
= [−2(c1 t e^(−t) + c2 e^(−t)) + c1 e^(−t); 2(c1 t e^(−t) + c2 e^(−t))]
= c1 e^(−t)[−2t + 1; 2t] + c2 e^(−t)[−2; 2].

This is an acceptable answer, or we can write the individual equations comprising the general solution: x1(t) = −2(c1 t e^(−t) + c2 e^(−t)) + c1 e^(−t) and x2(t) = 2(c1 t e^(−t) + c2 e^(−t)).

33. The eigenvalues of A are λ = −1, −1, 1. The eigenspace corresponding to λ = −1 is nullspace(A + I) = nullspace[1 1 0; 0 1 1; 1 1 0], which is only one-dimensional, spanned by the vector (1, −1, 1). Therefore, we seek a generalized eigenvector v of A corresponding to λ = −1 such that {(A + I)v, v} is a cycle of generalized eigenvectors. Note that

A + I = [1 1 0; 0 1 1; 1 1 0] and (A + I)² = [1 2 1; 1 2 1; 1 2 1].

In order that v be a generalized eigenvector of A corresponding to λ = −1, we should choose v such that (A + I)²v = 0 and (A + I)v ≠ 0. There are many valid choices; let us choose v = (1, 0, −1). Then (A + I)v = (1, −1, 1). Hence, we obtain the cycle of generalized eigenvectors corresponding to λ = −1: {(1, −1, 1), (1, 0, −1)}.

Next, consider the eigenspace corresponding to λ = 1. For this, we compute nullspace(A − I) = nullspace[−1 1 0; 0 −1 1; 1 1 −2]. This can be row-reduced to [1 −1 0; 0 1 −1; 0 0 0]. We find the eigenvector (1, 1, 1) as a basis for this eigenspace. Hence, we are ready to form the matrices

S = [1 1 1; −1 0 1; 1 −1 1] and J = [−1 1 0; 0 −1 0; 0 0 1].

Via the substitution x = Sy, the system x′ = Ax is transformed into y′ = Jy. The corresponding equations are y1′ = −y1 + y2, y2′ = −y2, y3′ = y3. The third equation has solution y3(t) = c3 e^t, the second equation has solution y2(t) = c2 e^(−t), and so the first equation becomes y1′ + y1 = c2 e^(−t). This is a first-order linear equation with integrating factor I(t) = e^t. When we multiply the differential equation for y1(t) by I(t), it becomes (y1 · e^t)′ = c2.
Integrating both sides yields y1 · e^t = c2 t + c1. Thus, y1(t) = c2 t e^(−t) + c1 e^(−t). Thus, we have

y(t) = [y1(t); y2(t); y3(t)] = [c2 t e^(−t) + c1 e^(−t); c2 e^(−t); c3 e^t].

Finally, we solve for x(t):

x(t) = S y(t) = [1 1 1; −1 0 1; 1 −1 1][c2 t e^(−t) + c1 e^(−t); c2 e^(−t); c3 e^t]
= [c2 t e^(−t) + c1 e^(−t) + c2 e^(−t) + c3 e^t; −c2 t e^(−t) − c1 e^(−t) + c3 e^t; c2 t e^(−t) + c1 e^(−t) − c2 e^(−t) + c3 e^t]
= c1 e^(−t)[1; −1; 1] + c2 e^(−t)[t + 1; −t; t − 1] + c3 e^t[1; 1; 1].

34. The eigenvalues of A are λ = −2, −2, −2. The eigenspace corresponding to λ = −2 is nullspace(A + 2I) = nullspace[0 0 0; 1 −1 −1; −1 1 1], and there are two linearly independent vectors in this nullspace, corresponding to the unpivoted columns of the row-echelon form of this matrix. Therefore, the Jordan canonical form of A is J = [−2 1 0; 0 −2 0; 0 0 −2].

To form an invertible matrix S such that S⁻¹AS = J, we must find a cycle of generalized eigenvectors corresponding to λ = −2 of length 2: {(A + 2I)v, v}. Now A + 2I = [0 0 0; 1 −1 −1; −1 1 1] and (A + 2I)² = 0₃. Since (A + 2I)² = 0₃, every nonzero vector in R³ is a generalized eigenvector corresponding to λ = −2. We need only find a nonzero vector v such that (A + 2I)v ≠ 0. There are many valid choices; let us choose v = (1, 0, 0). Then (A + 2I)v = (0, 1, −1), an eigenvector of A corresponding to λ = −2. We also need a second linearly independent eigenvector corresponding to λ = −2. There are many choices; let us choose (1, 1, 0). Therefore, we can form the matrix S = [0 1 1; 1 0 1; −1 0 0].

Via the substitution x = Sy, the system x′ = Ax is transformed into y′ = Jy. The corresponding equations are y1′ = −2y1 + y2, y2′ = −2y2, y3′ = −2y3. The third equation has solution y3(t) = c3 e^(−2t), the second equation has solution y2(t) = c2 e^(−2t), and so the first equation becomes y1′ + 2y1 = c2 e^(−2t). This is a first-order linear equation with integrating factor I(t) = e^(2t). When we multiply the differential equation for y1(t) by I(t), it becomes (y1 · e^(2t))′ = c2. Integrating both sides yields y1 · e^(2t) = c2 t + c1.
Thus, y1 (t) = c2 te−2t + c1 e−2t . Thus, we have y1 (t) c2 te−2t + c1 e−2t . c2 e−2t y(t) = y2 (t) = −2t y3 (t) c3 e Finally, we solve for x(t): c2 te−2t + c1 e−2t 1 c2 e−2t 1 0 c3 e−2t c2 e−2t + c3 e−2t = c2 te−2t + c1 e−2t + c3 e−2t −(c2 te−2t + c1 e−2t ) 1 1 0 = c1 e−2t 1 + c2 e−2t t + c3 e−2t 1 . 0 −t −1 0 x(t) = S y(t) = 1 −1 1 0 0 35. The eigenvalues of A are λ = 4, 4, 4. The eigenspace corresponding to λ = 4 is 000 nullspace(A − 4I ) = nullspace 1 0 0 , 010 and there is only one eigenvector. Therefore, the Jordan 41 J = 0 4 00 canonical form of A is 0 1 . 4 Next, we need to ﬁnd an invertible matrix S such that S −1 AS = J . To do this, we must ﬁnd a cycle of generalized eigenvectors {(A − 4I )2 v, (A − 4I )v, v} of length 3 corresponding to λ = 4. We have 000 000 A − 4I = 1 0 0 and (A − 4I )2 = 0 0 0 and (A − 4I )3 = 03 . 010 100 From (A − 4I )3 = 03 , we know that every nonzero vector is a generalized eigenvector corresponding to 1 λ = 4. We choose v = 0 (any multiple of the chosen vector v would be acceptable as well). Thus, 0 457 0 0 (A − 4I )v = 1 and (A − 4I )2 v = 0 . Thus, we have the cycle of generalized eigenvectors 0 1 0 1 0 0 , 1 , 0 . 1 0 0 Thus, we can form the matrix 0 S= 0 1 0 1 0 1 0 . 0 Via the substitution x = S y, the system x = Ax is transformed into y = J y. The corresponding equations are y1 = 4y1 + y2 , y 2 = 4 y2 + y 3 , y3 = 4y3 . The third equation has solution y3 (t) = c3 e4t , and the second equation becomes y2 − 4y2 = c3 e4t . This is a ﬁrst-order linear equation with integrating factor I (t) = e−4t . When we multiply the diﬀerential equation for y2 (t) by I (t), it becomes (y2 · e−4t ) = c3 . Integrating both sides yields y2 · e−4t = c3 t + c2 . Thus, y2 (t) = c3 te4t + c2 e4t = e4t (c3 t + c2 ). Therefore, the diﬀerential equation for y1 (t) becomes y1 − 4y1 = e4t (c3 t + c2 ). This equation is ﬁrst-order linear with integrating factor I (t) = e−4t . 
When we multiply the diﬀerential equation for y1 (t) by I (t), it becomes (y1 · e−4t ) = c3 t + c2 . Integrating both sides, we obtain y1 · e−4t = c3 t2 + c2 t + c1 . 2 Hence, y1 (t) = e4t c3 t2 + c2 t + c1 . 2 Thus, we have 4t 2 y1 (t) e c3 t2 + c2 t + c1 y(t) = y2 (t) = e4t (c3 t + c2 ) y3 (t) c3 e4t . 458 Finally, we solve for x(t): 4t 2 1 e c3 t2 + c2 t + c1 0 e4t (c3 t + c2 ) 0 c3 e4t c3 e4t e4t (c3 t + c2 ) 0 x(t) = S y(t) = 0 1 0 1 0 = 2 e4t c3 t2 + c2 t + c1 0 0 1 = c1 e4t 0 + c2 e4t 1 + c3 e4t t . 1 t t2 /2 36. The eigenvalues of A are λ = −3, −3. The eigenspace corresponding to λ = −3 is only one-dimensional, and therefore the Jordan canonical form of A contains one 2 × 2 Jordan block: −3 1 0 −3 J= . Next, we look for a cycle of generalized eigenvectors of the form {(A + 3I )v, v}, where v is a generalized 2 1 −1 eigenvector of A. Since (A +3I )2 = = 02 , every nonzero vector in R2 is a generalized eigenvector. 1 −1 1 1 Let us choose v = . Then (A + 3I )v = . Thus, we form the matrix 0 1 1 1 S= 1 0 . Via the substitution x = S y, the system x = Ax is transformed into y = J y. The corresponding equations are y1 = −3y1 + y2 and y2 = −3y2 . The second equation has solution y2 (t) = c2 e−3t . Substituting this expression for y2 (t) into the diﬀerential equation for y1 (t) yields y1 + 3y1 = c2 e−3t . An integrating factor for this ﬁrst-order linear diﬀerential equation is I (t) = e3t . Multiplying the diﬀerential equation for y1 (t) by I (t) gives us (y1 · e3t ) = c2 . Integrating both sides, we obtain y1 · e3t = c2 t + c1 . Thus, y1 (t) = c2 te−3t + c1 e−3t . Thus, y(t) = y1 (t) y2 (t) = c2 te−3t + c1 e−3t c2 e−3t . Finally, we solve for x(t): x(t) = S y(t) = = 1 1 c2 te−3t + c1 e−3t c2 e−3t 1 0 c2 te−3t + c1 e−3t + c2 e−3t c2 te−3t + c1 e−3t = c1 e−3t 1 1 + c2 e−3t t+1 t . 459 Now, we must apply the initial condition: 0 −1 1 1 = x(0) = c1 + c2 1 0 . Therefore, c1 = −1 and c2 = 1. 
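Before stating the final answer to Problem 36, the constants just found can be verified, along with the fact that the resulting x(t) really solves the system. The matrix A itself is not displayed intact above; reading A + 3I = [ 1 -1; 1 -1 ] off the computation gives the A used below, so treat that reconstruction as an assumption. A short sympy check, not part of the original solution:

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

# Initial condition: c1*(1,1) + c2*(1,0) = (0,-1)
consts = sp.solve([c1 + c2, c1 + 1], [c1, c2])
assert consts == {c1: -1, c2: 1}

# A reconstructed from A + 3I = [[1,-1],[1,-1]] -- an assumption.
A = sp.Matrix([[-2, -1], [1, -4]])
x = sp.exp(-3*t) * (consts[c1]*sp.Matrix([1, 1]) + consts[c2]*sp.Matrix([t + 1, t]))

# x satisfies x' = Ax and the initial condition x(0) = (0,-1).
assert sp.simplify(x.diff(t) - A*x) == sp.zeros(2, 1)
assert x.subs(t, 0) == sp.Matrix([0, -1])
```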
Hence, the unique solution to the given initial-value problem is

x(t) = -e^(-3t) [ 1; 1 ] + e^(-3t) [ t + 1; t ] = e^(-3t) [ t; t - 1 ].

37. Let J = JCF(A) = JCF(B). Thus, there exist invertible matrices S and T such that S^(-1) A S = J and T^(-1) B T = J. Thus, S^(-1) A S = T^(-1) B T, and so

B = T S^(-1) A S T^(-1) = (S T^(-1))^(-1) A (S T^(-1)),

which implies by definition that A and B are similar matrices.

38. Since the characteristic polynomial has degree 3, we know that A is a 3 × 3 matrix. Moreover, the characteristic equation has roots λ = 0, 0, 0. Hence, the Jordan canonical form J of A must be one of the three below:

[ 0 0 0; 0 0 0; 0 0 0 ],  [ 0 1 0; 0 0 0; 0 0 0 ],  [ 0 1 0; 0 0 1; 0 0 0 ].

In all three cases, note that J^3 = 0_3. Moreover, there exists an invertible matrix S such that S^(-1) A S = J. Thus, A = S J S^(-1), and so

A^3 = (S J S^(-1))^3 = S J^3 S^(-1) = S 0_3 S^(-1) = 0_3,

which implies that A is nilpotent.

39. (a): Let J be an n × n Jordan block with eigenvalue λ. Then the eigenvalues of J^T are λ (with multiplicity n). The matrix J^T - λI consists of 1's on the subdiagonal (the diagonal parallel to and directly beneath the main diagonal) and zeros elsewhere. Hence, the null space of J^T - λI is one-dimensional (with a free variable corresponding to the right-most column of J^T - λI). Therefore, the Jordan canonical form of J^T consists of a single Jordan block, since there is only one linearly independent eigenvector corresponding to the eigenvalue λ. However, a single Jordan block with eigenvalue λ is precisely the matrix J. Therefore, JCF(J^T) = J.

(b): Let JCF(A) = J. Then there exists an invertible matrix S such that S^(-1) A S = J. Transposing both sides, we obtain (S^(-1) A S)^T = J^T, or S^T A^T (S^(-1))^T = J^T, or S^T A^T (S^T)^(-1) = J^T. Hence, the matrix A^T is similar to J^T. However, by applying part (a) to each block in J^T, we find that JCF(J^T) = J. Hence, J^T is similar to J. By Problem 26 in Section 5.8, we conclude that A^T is similar to J. Hence, A^T and J have the same Jordan canonical form.
However, since JCF(J) = J, we deduce that JCF(A^T) = J = JCF(A), as required.

Solutions to Section 5.12

Problems:

1. NO. Note that T(1, 1) = (2, 0, 0, 1), so that 2T(1, 1) = (4, 0, 0, 2), while T(2, 2) = (4, 0, 0, 4). Since T(2, 2) ≠ 2T(1, 1), T is not a linear transformation.

2. YES. The function T can be represented by the matrix function T(x) = Ax, where

A = [ 2 -3 0; -1 0 0 ]

and x is a vector in R^3. Every matrix transformation of the form T(x) = Ax is linear. Since the domain of T has larger dimension than the codomain of T, T cannot be one-to-one. However, since T(0, -1/3, 0) = (1, 0) and T(-1, -2/3, 0) = (0, 1), we see that T is onto. Thus, Rng(T) = R^2 (2-dimensional), and so a basis for Rng(T) is {(1, 0), (0, 1)}. The kernel of T consists of vectors of the form (0, 0, z), and hence, a basis for Ker(T) is {(0, 0, 1)} and Ker(T) is 1-dimensional.

3. YES. The function T can be represented by the matrix function T(x) = Ax, where

A = [ 0 0 -3; 2 -1 5 ]

and x is a vector in R^3. Every matrix transformation of the form T(x) = Ax is linear. Since the domain of T has larger dimension than the codomain of T, T cannot be one-to-one. However, since T(1, 1/3, -1/3) = (1, 0) and T(1, 1, 0) = (0, 1), we see that T is onto. Thus, Rng(T) = R^2, and so a basis for Rng(T) is {(1, 0), (0, 1)}, and Rng(T) is 2-dimensional. The kernel of T consists of vectors of the form (t, 2t, 0), where t ∈ R, and hence, a basis for Ker(T) is {(1, 2, 0)}. We have that Ker(T) is 1-dimensional.

4. YES. The function T is a linear transformation, because if g, h ∈ C[0, 1], then

T(g + h) = ((g + h)(0), (g + h)(1)) = (g(0) + h(0), g(1) + h(1)) = (g(0), g(1)) + (h(0), h(1)) = T(g) + T(h),

and if c is a scalar,

T(cg) = ((cg)(0), (cg)(1)) = (c g(0), c g(1)) = c (g(0), g(1)) = c T(g).

Note that any function g ∈ C[0, 1] for which g(0) = g(1) = 0 (such as g(x) = x^2 - x) belongs to Ker(T), and hence, T is not one-to-one.
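The matrix representatives claimed in Problems 2 and 3 can be checked numerically. The entries below are read off the displayed solutions (so they are an interpretation of the printed matrices); the assertions confirm the stated preimages of (1, 0) and (0, 1) and the stated kernel directions. This check is not part of the original solution:

```python
import numpy as np

# Matrices as read from the solutions to Problems 2 and 3 (an interpretation).
A2 = np.array([[2., -3., 0.], [-1., 0., 0.]])
A3 = np.array([[0., 0., -3.], [2., -1., 5.]])

# Problem 2: stated preimages of (1,0) and (0,1), and the kernel direction (0,0,1).
assert np.allclose(A2 @ [0, -1/3, 0], [1, 0])
assert np.allclose(A2 @ [-1, -2/3, 0], [0, 1])
assert np.allclose(A2 @ [0, 0, 1], [0, 0])

# Problem 3: stated preimages, and the kernel direction (1,2,0).
assert np.allclose(A3 @ [1, 1/3, -1/3], [1, 0])
assert np.allclose(A3 @ [1, 1, 0], [0, 1])
assert np.allclose(A3 @ [1, 2, 0], [0, 0])
```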
However, given (a, b) ∈ R2 , note that g deﬁned by g (x) = a + (b − a)x satisﬁes T (g ) = (g (0), g (1)) = (a, b), 461 so T is onto. Thus, Rng(T ) = R2 , with basis {(1, 0), (0, 1)} (2-dimensional). Now, Ker(T ) is inﬁnitedimensional. We cannot list a basis for this subspace of C [0, 1]. 5. YES. The function T can be represented by the matrix function T (x) = Ax, where 1/5 A= 1/5 and x is a vector in R2 . Every such matrix transformation is linear. Since the domain of T has larger dimension than the codomain of T , T cannot be one-to-one. However, T (5, 0) = 1, so we see that T is onto. Thus, Rng(T ) = R, a 1-dimensional space with basis {1}. The kernel of T consists of vectors of the form t(1, −1), and hence, a basis for Ker(T ) is {(1, −1)}. We have that Ker(T ) is 1-dimensional. 6. NO. For instance, note that T 1 1 0 1 = (1, 0) T 1 1 0 1 +T 1 0 1 1 + 1 0 1 1 and T 1 0 1 1 = (0, 1), and so = (1, 0) + (0, 1) = (1, 1). However, T 1 1 0 1 =T 2 1 1 2 = (2, 2). Thus, T does not respect addition, and hence, T is not a linear transformation. Similar work could be given to show that T also fails to respect scalar multiplication. 7. YES. We can verify that T respects addition and scalar multiplication as follows: T respects addition: Let a1 + b1 x + c1 x2 and a2 + b2 x + c2 x2 belong to P2 . Then T ((a1 + b1 x + c1 x2 ) + (a2 + b2 x + c2 x2 )) = T ((a1 + a2 ) + (b1 + b2 )x + (c1 + c2 )x2 ) = −(a1 + a2 ) − (b1 + b2 ) 0 3(c1 + c2 ) − (a1 + a2 ) −2(b1 + b2 ) = −a1 − b1 3c1 − a1 0 −2b1 + −a2 − b2 3c2 − a2 0 −2b2 = T (a1 + b1 x + c1 x2 ) + T (a2 + b2 x + c2 x2 ). T respects scalar multiplication: Let a + bx + cx2 belong to P2 and let k be a scalar. Then we have T (k (a + bx + cx2 )) = T ((ka) + (kb)x + (kc)x2 ) = = =k −ka − kb 0 3(kc) − (ka) −2(kb) k (−a − b) 0 k (3c − a) k (−2b) −a − b 3c − a 0 −2b = kT (a + bx + cx2 ). 462 Next, observe that a + bx + cx2 belongs to Ker(T ) if and only if −a − b = 0, 3c − a = 0, and −2b = 0. 
These equations require that b = 0, a = 0, and c = 0. Thus, Ker(T ) = {0}, which implies that Ker(T ) is 0-dimensional (with basis ∅), and that T is one-to-one. However, since M2 (R) is 4-dimensional and P2 is only 3-dimensional, we see immediately that T cannot be onto. By the Rank-Nullity Theorem, in fact, Rng(T ) must be 3-dimensional, and a basis is given by −1 −1 Basis for Rng(T ) = {T (1), T (x), T (x2 )} = 0 0 −1 0 0 −2 , 0 3 , 0 0 . 8. YES. We can verify that T respects addition and scalar multiplication as follows: T respects addition: Let A, B belong to M2 (R). Then T (A + B ) = (A + B ) + (A + B )T = A + B + AT + B T = (A + AT ) + (B + B T ) = T (A) + T (B ). T respects scalar multiplication: Let A belong to M2 (R), and let k be a scalar. Then T (kA) = (kA) + (kA)T = kA + kAT = k (A + AT ) = kT (A). Thus, T is a linear transformation. Note that if A is any skew-symmetric matrix, then T (A) = A + AT = A + (−A) = 0, so Ker(T ) consists precisely of the 2 × 2 skew-symmetric matrices. These matrices take the 0 −a 0 −1 form , for a constant a, and thus a basis for Ker(T ) is given by , and Ker(T ) is a 0 1 0 1-dimensional. Consequently, T is not one-to-one. Therefore, by Proposition 5.4.13, T also fails to be onto. In fact, by the Rank-Nullity Theorem, Rng(T ) must be 3-dimensional. A typical element of the range of T takes the form T ab cd = ab cd ac bd + = 2a b+c c+b 2d . The characterizing feature of this matrix is that it is symmetric. So Rng(T ) consists of all 2 × 2 symmetric matrices, and hence a basis for Rng(T ) is 1 0 0 0 , 0 1 1 0 , 0 0 0 1 . 9. YES. We can verify that T respects addition and scalar multiplication as follows: T respects addition: Let (a1 , b1 , c1 ) and (a2 , b2 , c2 ) belong to R3 . 
Then T ((a1 , b1 , c1 ) + (a2 , b2 , c2 )) = T (a1 + a2 , b1 + b2 , c1 + c2 ) = (a1 + a2 )x2 + (2(b1 + b2 ) − (c1 + c2 ))x + (a1 + a2 − 2(b1 + b2 ) + (c1 + c2 )) = [a1 x2 + (2b1 − c1 )x + (a1 − 2b1 + c1 )] + [a2 x2 + (2b2 − c2 )x + (a2 − 2b2 + c2 )] = T ((a1 , b1 , c1 )) + T ((a2 , b2 , c2 )). T respects scalar multiplication: Let (a, b, c) belong to R3 and let k be a scalar. Then T (k (a, b, c)) = T (ka, kb, kc) = (ka)x2 + (2kb − kc)x + (ka − 2kb + kc) = k (ax2 + (2b − c)x + (a − 2b + c)) = kT ((a, b, c)). 463 Thus, T is a linear transformation. Now, (a, b, c) belongs to Ker(T ) if and only if a = 0, 2b − c = 0, and a − 2b + c = 0. These equations collectively require that a = 0 and 2b = c. Setting c = 2t, we ﬁnd that b = t. Hence, (a, b, c) belongs to Ker(T ) if and only if (a, b, c) has the form (0, t, 2t) = t(0, 1, 2). Hence, {(0, 1, 2)} is a basis for Ker(T ), which is therefore 1-dimensional. Hence, T is not one-to-one. By Proposition 5.4.13, T is also not onto. In fact, the Rank-Nullity Theorem implies that Rng(T ) must be 2-dimensional. It is spanned by {T (1, 0, 0), T (0, 1, 0), T (0, 0, 1)} = {x2 + 1, 2x − 2, −x + 1}, but the last two polynomials are proportional to each other. Omitting the polynomial 2x − 2 (this is an arbitrary choice; we could have omitted −x + 1 instead), we arrive at a basis for Rng(T ): {x2 + 1, −x + 1}. 10. YES. We can verify that T respects addition and scalar multiplication as follows: T respects addition: Let (x1 , x2 , x3 ) and (y1 , y2 , y3 ) be vectors in R3 . Then T ((x1 , x2 , x3 ) + (y1 , y2 , y3 )) = T (x1 + y1 , x2 + y2 , x3 + y3 ) = 0 (x1 + y1 ) − (x2 + y2 ) + (x3 + y3 ) −(x1 + y1 ) + (x2 + y2 ) − (x3 + y3 ) 0 = 0 −x1 + x2 − x3 x1 − x2 + x3 0 + 0 −y1 + y2 − y3 y1 − y2 + y 3 0 = T ((x1 , x2 , x3 )) + T ((y1 , y2 , y3 )). T respects scalar multiplication: Let (x1 , x2 , x3 ) belong to R3 and let k be a scalar. 
Then

T(k(x1, x2, x3)) = T(k x1, k x2, k x3) = [ 0, (k x1) - (k x2) + (k x3); -(k x1) + (k x2) - (k x3), 0 ]
= k [ 0, x1 - x2 + x3; -x1 + x2 - x3, 0 ] = k T((x1, x2, x3)).

Thus, T is a linear transformation. Now, (x1, x2, x3) belongs to Ker(T) if and only if x1 - x2 + x3 = 0 and -x1 + x2 - x3 = 0. Of course, the latter equation is equivalent to the former, so the kernel of T consists simply of ordered triples (x1, x2, x3) with x1 - x2 + x3 = 0. Setting x3 = t and x2 = s, we have x1 = s - t, so a typical element of Ker(T) takes the form (s - t, s, t), where s, t ∈ R. Extracting the free variables, we find a basis for Ker(T): {(1, 1, 0), (-1, 0, 1)}. Hence, Ker(T) is 2-dimensional.

By the Rank-Nullity Theorem, Rng(T) must be 1-dimensional. In fact, Rng(T) consists precisely of the set of 2 × 2 skew-symmetric matrices, with basis { [ 0 -1; 1 0 ] }. Since M2(R) is 4-dimensional, T fails to be onto.

11. We have T(x, y, z) = (-x + 8y, 2x - 2y - 5z).

12. We have T(x, y) = (-x + 4y, 2y, 3x - 3y, 3x - 3y, 2x - 6y).

13. We have T(x) = (x/2) T(2) = (x/2)(-1, 5, 0, -2) = (-x/2, 5x/2, 0, -x).

14. For an arbitrary 2 × 2 matrix [ a b; c d ], if we write

[ a b; c d ] = r [ 1 0; 0 1 ] + s [ 0 1; 1 0 ] + t [ 1 0; 0 0 ] + u [ 1 1; 0 0 ],

we can solve for r, s, t, u to find r = d, s = c, t = a - b + c - d, u = b - c. Thus,

T[ a b; c d ] = d T[ 1 0; 0 1 ] + c T[ 0 1; 1 0 ] + (a - b + c - d) T[ 1 0; 0 0 ] + (b - c) T[ 1 1; 0 0 ]
= d(2, -5) + c(0, -3) + (a - b + c - d)(1, 1) + (b - c)(-6, 2)
= (a - 7b + 7c + d, a + b - 4c - 6d).

15. For an arbitrary element a x^2 + b x + c in P2, if we write

a x^2 + b x + c = r(x^2 - x - 3) + s(2x + 5) + 6t = r x^2 + (-r + 2s) x + (-3r + 5s + 6t),

we can solve for r, s, t to find r = a, s = (1/2)(a + b), and t = (1/12)a - (5/12)b + (1/6)c.
Thus,

T(a x^2 + b x + c) = a T(x^2 - x - 3) + (1/2)(a + b) T(2x + 5) + ((1/12)a - (5/12)b + (1/6)c) T(6)
= a [ -2 1; -4 -1 ] + (1/2)(a + b) [ 0 1; 2 -2 ] + ((1/12)a - (5/12)b + (1/6)c) [ 12 6; 6 18 ]
= [ -a - 5b + 2c, 2a - 2b + c; -(5/2)a - (3/2)b + c, -(1/2)a - (17/2)b + 3c ].

16. Since dim[P5] = 6 and dim[M2(R)] = 4 = dim[Rng(T)] (since T is onto), the Rank-Nullity Theorem gives dim[Ker(T)] = 6 - 4 = 2.

17. Since T is one-to-one, dim[Ker(T)] = 0, so the Rank-Nullity Theorem gives dim[Rng(T)] = dim[M2×3(R)] = 6.

18. Since A is lower triangular, its eigenvalues lie along the main diagonal: λ1 = 3 and λ2 = -1. Since A is 2 × 2 with two distinct eigenvalues, A is diagonalizable. To get an invertible matrix S such that S^(-1) A S = D, we need to find an eigenvector associated with each eigenvalue:

Eigenvalue λ1 = 3: To get an eigenvector, we consider

nullspace(A - 3I) = nullspace [ 0 0; 16 -4 ],

and we see that one possible eigenvector is (1, 4).

Eigenvalue λ2 = -1: To get an eigenvector, we consider

nullspace(A + I) = nullspace [ 4 0; 16 0 ],

and we see that one possible eigenvector is (0, 1).

Putting the above results together, we form

S = [ 1 0; 4 1 ]  and  D = [ 3 0; 0 -1 ].

19. To compute the eigenvalues, we find the characteristic equation

det(A - λI) = det [ 13 - λ, -9; 25, -17 - λ ] = (13 - λ)(-17 - λ) + 225 = λ^2 + 4λ + 4,

and the roots of this equation are λ = -2, -2.

Eigenvalue λ = -2: We compute

nullspace(A + 2I) = nullspace [ 15 -9; 25 -15 ],

but since there is only one linearly independent solution to the corresponding system (one free variable), the eigenvalue λ = -2 does not have two linearly independent eigenvectors. Hence, A is not diagonalizable.

20. To compute the eigenvalues, we find the characteristic equation

det(A - λI) = det [ -4 - λ, 3, 0; -6, 5 - λ, 0; 3, -3, -1 - λ ] = (-1 - λ)[(-4 - λ)(5 - λ) + 18] = (-1 - λ)(λ^2 - λ - 2) = -(λ + 1)^2 (λ - 2),

so the eigenvalues are λ1 = -1 and λ2 = 2.

Eigenvalue λ1 = -1: To get eigenvectors, we consider
nullspace(A + I ) = nullspace −6 3 −3 0 0 00 There are two free variables, z = t and y = s. From the ﬁrst equation x = s. Thus, two linearly independent eigenvectors can be obtained corresponding to λ1 = −1: 1 0 1 and 0 . 0 1 466 Eigenvalue λ2 = 2: To get an eigenvector, we consider −6 3 0 1 −1 −1 3 0 ∼ 0 1 2 . nullspace(A − 2I ) = nullspace −6 3 −3 −3 0 0 0 We let z = t. Then y = −2t from the middle line, and x = −t from the top line. Thus, an eigenvector corresponding to λ2 = 2 may be chosen as −1 −2 . 1 Putting the above results together, we −1 S = −2 1 form 1 1 0 0 0 1 and 2 0 0 0 . D = 0 −1 0 0 −1 21. To compute the eigenvalues, we ﬁnd the characteristic equation 1−λ 1 0 = (−2 − λ) [(1 − λ)(5 − λ) + 4] 5−λ 0 det(A − λI ) = det −4 17 −11 −2 − λ = (−2 − λ)(λ2 − 6λ + 9) = (−2 − λ)(λ − 3)2 , so the eigenvalues are λ1 = 3 and λ2 = −2. Eigenvalue λ1 = 3: To get eigenvectors, we consider 1 −3 −5 1 −3 −5 −2 1 0 −2 1 0 1 0 ∼ 0 1 2 . 2 0 ∼ 17 −11 −5 ∼ −2 nullspace(A−3I ) = nullspace −4 17 −11 −5 0 0 0 0 0 0 0 0 0 The latter matrix contains only one unpivoted column, so that only one linearly independent eigenvector can be obtained. However, λ1 = 3 occurs with multiplicity 2 as a root of the characteristic equation for the matrix. Therefore, the matrix is not diagonalizable. 22. We are given that the only eigenvalue of A is λ = 2. Eigenvalue λ = 2: To get eigenvectors, we consider −3 −1 3 1 0 −1 1 2 −4 ∼ −3 −1 3 ∼ 0 nullspace(A − 2I ) = nullspace 4 −1 0 1 4 2 −4 0 0 −1 1 0 . 2 0 We see that only one unpivoted column will occur in a row-echelon form of A − 2I , and thus, only one linearly independent eigenvector can be obtained. Since the eigenvalue λ = 2 occurs with multiplicity 3 as a root of the characteristic equation for the matrix, the matrix is not diagonalizable. 23. We are given that the eigenvalues of A are λ1 = 4 and λ2 = −1. 467 Eigenvalue λ1 = 4: We consider 5 5 −5 0 . 
nullspace(A − 4I ) = nullspace 0 −5 10 5 −10 The middle row tells us that nullspace vectors (x, y, z ) must have y = 0. From this information, the ﬁrst and last rows of the matrix tell us the same thing: x = z . Thus, an eigenvector corresponding to λ1 = 4 may be chosen as 1 0 . 1 Eigenvalue λ2 = −1: We consider 10 5 −5 10 5 −5 0 . 0 ∼ 0 0 nullspace(A + I ) = nullspace 0 0 00 0 10 5 −5 From the ﬁrst row, a vector (x, y, z ) in the nullspace must satisfy 10x + 5y − 5z = 0. Setting z = t and y = s, 1 we get x = 2 t − 1 s. Hence, the eigenvectors corresponding to λ2 = −1 take the form ( 1 t − 1 s, s, t), and so 2 2 2 a basis for this eigenspace is −1/2 1/2 0 , 1 . 0 1 Putting the above results together, 1 S= 0 1 we form 1/2 −1/2 0 1 1 0 and 4 0 0 0 . D = 0 −1 0 0 −1 28. We will compute the dimension of each of the two eigenspaces associated For λ1 = 1, we compute as follows: 4 8 16 1 0 8 ∼ 0 nullspace(A − I ) = nullspace 4 −4 −4 −12 0 with the matrix A. 2 1 0 4 1 , 0 which has only one unpivoted column. Thus, this eigenspace is 1-dimensional. For λ2 = −3, we compute as follows: 8 8 16 11 4 8 ∼ 0 0 nullspace(A + 3I ) = nullspace 4 −4 −4 −8 00 2 0 , 0 which has two unpivoted columns. Thus, this eigenspace is 2-dimensional. Between the two eigenspaces, we have a complete set of linearly independent eigenvectors. Hence, the matrix A in this case is diagonalizable. Therefore, A is diagonalizable, and we may take 1 0 0 0 . J = 0 −3 0 0 −3 468 (Of course, the eigenvalues of A may be listed in any order along the main diagonal of the Jordan canonical form, thus yielding other valid Jordan canonical forms for A.) 29. We will compute the dimension of each of the two eigenspaces associated with the matrix A. For λ1 = −1, we compute as follows: 3 nullspace(A + I ) = nullspace 2 −1 1 1 10 1 2 −2 ∼ 0 1 −2 , 0 −1 00 0 which has one unpivoted column. Thus, this eigenspace is 1-dimensional. 
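As an aside, the matrix in Problem 28 can be reconstructed from the two row reductions shown for A - I and A + 3I; the entries used below are therefore an assumption, but they reproduce both computations. A short sympy check of the eigenspace dimensions and the diagonal Jordan form, not part of the original solution:

```python
import sympy as sp

# A reconstructed from the displayed work in Problem 28 -- an assumption.
A = sp.Matrix([[5, 8, 16], [4, 1, 8], [-4, -4, -11]])

# Geometric multiplicity = 3 - rank of (A - lambda*I).
assert (A - sp.eye(3)).rank() == 2       # lambda = 1: eigenspace is 1-dimensional
assert (A + 3*sp.eye(3)).rank() == 1     # lambda = -3: eigenspace is 2-dimensional

# With a complete set of eigenvectors, the Jordan canonical form is diagonal.
P, J = A.jordan_form()
assert J.is_diagonal()
assert sorted(J.diagonal()) == [-3, -3, 1]
```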
For λ2 = 3, we compute as follows: 1 −1 −1 −1 1 1 1 6 , nullspace(A − 3I ) = nullspace 2 −2 −2 ∼ 0 −1 0 −5 0 0 0 which has one unpivoted column. Thus, this eigenspace is also one-dimensional. Since we have only generated two linearly independent eigenvectors from the eigenvalues of A, we know that A is not diagonalizable, and hence, the Jordan canonical form of A is not a diagonal matrix. We must have one 1 × 1 Jordan block and one 2 × 2 Jordan block. To determine which eigenvalue corresponds to the 1 × 1 block and which corresponds to the 2 × 2 block, we must determine the multiplicity of the eigenvalues as roots of the characteristic equation of A. A short calculation shows that λ1 = −1 occurs with multiplicity 2, while λ2 = 3 occurs with multiplicity 1. Thus, the Jordan canonical form of A is −1 10 J = 0 −1 0 . 0 03 30. There are 3 diﬀerent possible Jordan canonical forms, up to a rearrangement of the Jordan blocks: Case 1: −1 0 00 0 −1 0 0 . J = 0 0 −1 0 0 0 02 In this case, the matrix has four linearly independent eigenvectors, and because all Jordan blocks have size 1 × 1, the maximum length of a cycle of generalized eigenvectors for this matrix is 1. Case 2: −1 1 00 0 −1 0 0 . J = 0 0 −1 0 0 0 02 In this case, the matrix has three linearly independent eigenvectors (two corresponding to λ = −1 and one corresponding to λ = 2). There is a Jordan block of size 2 × 2, and so a cycle of generalized eigenvectors can have a maximum length of 2 in this case. 469 Case 3: −1 1 00 0 −1 1 0 . J = 0 0 −1 0 0 0 02 In this case, the matrix has two linearly independent eigenvectors (one corresponding to λ = −1 and one corresponding to λ = 2). There is a Jordan block of size 3 × 3, and so a cycle of generalized eigenvectors can have a maximum length of 3 in this case. 31. There are 7 diﬀerent possible Jordan canonical forms, up to a rearrangement of the Jordan blocks: Case 1: J = 4 0 0 0 0 0 4 0 0 0 0 0 4 0 0 0 0 0 4 0 0 0 0 0 4 . 
In this case, the matrix has ﬁve linearly independent eigenvectors, and because all Jordan blocks have size 1 × 1, the maximum length of a cycle of generalized eigenvectors for this matrix is 1. Case 2: J = 4 0 0 0 0 1 4 0 0 0 0 0 4 0 0 0 0 0 4 0 0 0 0 0 4 . In this case, the matrix has four linearly independent eigenvectors, and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. Case 3: J = 4 0 0 0 0 1 4 0 0 0 0 0 4 0 0 0 0 1 4 0 0 0 0 0 4 . In this case, the matrix has three linearly independent eigenvectors, and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. Case 4: J = 4 0 0 0 0 1 4 0 0 0 0 1 4 0 0 0 0 0 4 0 0 0 0 0 4 . In this case, the matrix has three linearly independent eigenvectors, and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3. 470 Case 5: J = 4 0 0 0 0 1 4 0 0 0 0 1 4 0 0 0 0 0 4 0 0 0 0 1 4 . In this case, the matrix has two linearly independent eigenvectors, and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3. Case 6: J = 4 0 0 0 0 1 4 0 0 0 0 1 4 0 0 0 0 1 4 0 0 0 0 0 4 . In this case, the matrix has two linearly independent eigenvectors, and because the largest Jordan block is of size 4 × 4, the maximum length of a cycle of generalized eigenvectors for this matrix is 4. Case 7: J = 4 0 0 0 0 1 4 0 0 0 0 1 4 0 0 0 0 1 4 0 0 0 0 1 4 . In this case, the matrix has only one linearly independent eigenvector, and because the largest Jordan block is of size 5 × 5, the maximum length of a cycle of generalized eigenvectors for this matrix is 5. 32. 
There are 10 diﬀerent possible Jordan canonical forms, up to a rearrangement of the Jordan blocks: Case 1: J = 6 0 0 0 0 0 0 6 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 −3 0 0 0 −3 . In this case, the matrix has six linearly independent eigenvectors, and because all Jordan blocks have size 1 × 1, the maximum length of a cycle of generalized eigenvectors for this matrix is 1. Case 2: J = 6 0 0 0 0 0 1 6 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 −3 0 0 0 −3 . In this case, the matrix has ﬁve linearly independent eigenvectors (three corresponding to λ = 6 and two corresponding to λ = −3), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. 471 Case 3: J = 6 0 0 0 0 0 1 6 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 1 0 0 6 0 0 0 −3 0 0 0 −3 . In this case, the matrix has four linearly independent eigenvectors (two corresponding to λ = 6 and two corresponding to λ = −3), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. Case 4: J = 6 0 0 0 0 0 1 6 0 0 0 0 0 1 6 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 −3 0 0 0 −3 . In this case, the matrix has four linearly independent eigenvectors (two corresponding to λ = 6 and two corresponding to λ = −3), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3. Case 5: J = 6 0 0 0 0 0 1 6 0 0 0 0 0 1 6 0 0 0 0 0 0 0 0 0 1 0 0 6 0 0 0 −3 0 0 0 −3 . In this case, the matrix has three linearly independent eigenvectors (one corresponding to λ = 6 and two corresponding to λ = −3), and because the largest Jordan block is of size 4 × 4, the maximum length of a cycle of generalized eigenvectors for this matrix is 4. Case 6: J = 6 0 0 0 0 0 0 6 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 −3 1 0 0 −3 . 
In this case, the matrix has ﬁve linearly independent eigenvectors (four corresponding to λ = 6 and one corresponding to λ = −3), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. 472 Case 7: J = 6 0 0 0 0 0 1 6 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 −3 1 0 0 −3 . In this case, the matrix has four linearly independent eigenvectors (three corresponding to λ = 6 and one corresponding to λ = −3), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. Case 8: J = 6 0 0 0 0 0 1 6 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 1 0 0 6 0 0 0 −3 1 0 0 −3 . In this case, the matrix has three linearly independent eigenvectors (two corresponding to λ = 6 and one corresponding to λ = −3), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. Case 9: J = 6 0 0 0 0 0 1 6 0 0 0 0 0 1 6 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 −3 1 0 0 −3 . In this case, the matrix has three linearly independent eigenvectors (two corresponding to λ = 6 and one corresponding to λ = −3), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3. Case 10: J = 6 0 0 0 0 0 1 6 0 0 0 0 0 1 6 0 0 0 0 0 0 0 0 0 1 0 0 6 0 0 0 −3 1 0 0 −3 . In this case, the matrix has two linearly independent eigenvectors (one corresponding to λ = 6 and one corresponding to λ = −3), and because the largest Jordan block is of size 4 × 4, the maximum length of a cycle of generalized eigenvectors for this matrix is 4. 33. There are 15 diﬀerent possible Jordan canonical forms, up to a rearrangement of Jordan blocks: 473 Case 1: J = 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 −4 0 0 0 0 −4 0 0 0 0 −4 . 
In this case, the matrix has seven linearly independent eigenvectors (four corresponding to λ = 2 and three corresponding to λ = −4), and because all Jordan blocks are size 1 × 1, the maximum length of a cycle of generalized eigenvectors for this matrix is 1. Case 2: J = 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 −4 0 0 0 0 −4 0 0 0 0 −4 . In this case, the matrix has six linearly independent eigenvectors (three corresponding to λ = 2 and three corresponding to λ = −4), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. Case 3: J = 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 2 0 0 0 0 −4 0 0 0 0 −4 0 0 0 0 −4 . In this case, the matrix has ﬁve linearly independent eigenvectors (two corresponding to λ = 2 and three corresponding to λ = −4), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. Case 4: J = 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 −4 0 0 0 0 −4 0 0 0 0 −4 . In this case, the matrix has ﬁve linearly independent eigenvectors (two corresponding to λ = 2 and three corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3. 474 Case 5: J = 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 2 0 0 0 0 −4 0 0 0 0 −4 0 0 0 0 −4 . In this case, the matrix has four linearly independent eigenvectors (one corresponding to λ = 2 and three corresponding to λ = −4), and because the largest Jordan block is of size 4 × 4, the maximum length of a cycle of generalized eigenvectors for this matrix is 4. Case 6: J = 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 −4 1 0 0 0 −4 0 0 0 0 −4 . 
In this case, the matrix has six linearly independent eigenvectors (four corresponding to λ = 2 and two corresponding to λ = −4), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. Case 7: J = 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 −4 1 0 0 0 −4 0 0 0 0 −4 . In this case, the matrix has ﬁve linearly independent eigenvectors (three corresponding to λ = 2 and two corresponding to λ = −4), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. Case 8: J = 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 2 0 0 0 0 −4 1 0 0 0 −4 0 0 0 0 −4 . In this case, the matrix has four linearly independent eigenvectors (two corresponding to λ = 2 and two corresponding to λ = −4), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2. 475 Case 9: J = 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 −4 1 0 0 0 −4 0 0 0 0 −4 . In this case, the matrix has four linearly independent eigenvectors (two corresponding to λ = 2 and two corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3. Case 10: J = 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 2 0 0 0 0 −4 1 0 0 0 −4 0 0 0 0 −4 . In this case, the matrix has three linearly independent eigenvectors (one corresponding to λ = 2 and two corresponding to λ = −4), and because the largest Jordan block is of size 4 × 4, the maximum length of a cycle of generalized eigenvectors for this matrix is 4. Case 11: J = 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 −4 1 0 0 0 −4 1 0 0 0 −4 . 
In this case, the matrix has five linearly independent eigenvectors (four corresponding to λ = 2 and one corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3. Case 12: J = diag(J2(2), J1(2), J1(2), J3(−4)), where Jk(λ) denotes the k × k Jordan block with eigenvalue λ. In this case, the matrix has four linearly independent eigenvectors (three corresponding to λ = 2 and one corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3. Case 13: J = diag(J2(2), J2(2), J3(−4)). In this case, the matrix has three linearly independent eigenvectors (two corresponding to λ = 2 and one corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3. Case 14: J = diag(J3(2), J1(2), J3(−4)). In this case, the matrix has three linearly independent eigenvectors (two corresponding to λ = 2 and one corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3. Case 15: J = diag(J4(2), J3(−4)). In this case, the matrix has two linearly independent eigenvectors (one corresponding to λ = 2 and one corresponding to λ = −4), and because the largest Jordan block is of size 4 × 4, the maximum length of a cycle of generalized eigenvectors for this matrix is 4. 34. FALSE. For instance, if A = [1 0; 1 1] and B = [1 1; 0 1] (rows separated by semicolons), then we have eigenvalues λA = λB = 1, but the matrix A − B = [0 −1; 1 0] is invertible, and hence, zero is not an eigenvalue of A − B.
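The eigenvector counts and maximum cycle lengths quoted in these cases can be checked numerically. The sketch below (an illustration of ours, assuming NumPy is available; the helper names are not from the text) builds the Case 9 matrix and recovers both quantities from ranks of powers of J − λI.

```python
import numpy as np

# Illustrative check of Case 9, whose blocks are J3(2), J1(2), J2(-4), J1(-4).
J = np.array([
    [2, 1, 0, 0,  0,  0,  0],
    [0, 2, 1, 0,  0,  0,  0],
    [0, 0, 2, 0,  0,  0,  0],
    [0, 0, 0, 2,  0,  0,  0],
    [0, 0, 0, 0, -4,  1,  0],
    [0, 0, 0, 0,  0, -4,  0],
    [0, 0, 0, 0,  0,  0, -4],
], dtype=float)

def geometric_multiplicity(A, lam):
    # number of linearly independent eigenvectors = n - rank(A - lam*I)
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

def max_cycle_length(A, lam):
    # smallest m with rank((A - lam*I)^m) == rank((A - lam*I)^(m+1)),
    # i.e., the size of the largest Jordan block at lam
    n = A.shape[0]
    N = A - lam * np.eye(n)
    m, P = 1, N.copy()
    while np.linalg.matrix_rank(P) != np.linalg.matrix_rank(P @ N):
        P, m = P @ N, m + 1
    return m

print(geometric_multiplicity(J, 2))   # 2
print(geometric_multiplicity(J, -4))  # 2
print(max(max_cycle_length(J, 2), max_cycle_length(J, -4)))  # 3
```

Replacing J by the matrix of any other case reproduces the corresponding counts in the text.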
This can also be verified directly. 35. FALSE. For instance, if A = I2 and B = [1 1; 0 −1], then A2 = B2 = I2, but the matrices A and B are not similar. (Otherwise, there would exist an invertible matrix S such that S−1AS = B. But since A = I2, this reduces to I2 = B, which is clearly not the case. Thus, no such invertible matrix S exists.) 36. To see that T1 + T2 is a linear transformation, we must verify that it respects addition and scalar multiplication: T1 + T2 respects addition: Let v1 and v2 belong to V. Then we have (T1 + T2)(v1 + v2) = T1(v1 + v2) + T2(v1 + v2) = [T1(v1) + T1(v2)] + [T2(v1) + T2(v2)] = [T1(v1) + T2(v1)] + [T1(v2) + T2(v2)] = (T1 + T2)(v1) + (T1 + T2)(v2), where we have used the linearity of T1 and T2 individually in the second step. T1 + T2 respects scalar multiplication: Let v belong to V and let k be a scalar. Then we have (T1 + T2)(kv) = T1(kv) + T2(kv) = kT1(v) + kT2(v) = k[T1(v) + T2(v)] = k(T1 + T2)(v), as required. There is no particular relationship between Ker(T1), Ker(T2), and Ker(T1 + T2). 37. FALSE. For instance, consider T1 : R → R defined by T1(x) = x, and consider T2 : R → R defined by T2(x) = −x. Both T1 and T2 are linear transformations, and both of them are onto. However, (T1 + T2)(x) = T1(x) + T2(x) = x + (−x) = 0, so Rng(T1 + T2) = {0}, which implies that T1 + T2 is not onto. 38. FALSE. For instance, consider T1 : R → R defined by T1(x) = x, and consider T2 : R → R defined by T2(x) = −x. Both T1 and T2 are linear transformations, and both of them are one-to-one. However, (T1 + T2)(x) = T1(x) + T2(x) = x + (−x) = 0, so Ker(T1 + T2) = R, which implies that T1 + T2 is not one-to-one. 39. Assume that c1 T(v1) + c2 T(v2) + · · · + cn T(vn) = 0. We wish to show that c1 = c2 = · · · = cn = 0. To do this, use the linearity of T to rewrite the above equation as T(c1 v1 + c2 v2 + · · · + cn vn) = 0.
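Counterexamples of this kind are easy to check numerically. In the sketch below (the specific matrices are illustrative choices of ours, assuming NumPy is available), the first pair shares the eigenvalue 1 yet has an invertible difference, and the third matrix squares to the identity without being the identity.

```python
import numpy as np

# Problem 34-style counterexample: A and B share the eigenvalue 1
# (with multiplicity 2), yet A - B is invertible.
A = np.array([[1.0, 0.0], [1.0, 1.0]])
B = np.array([[1.0, 1.0], [0.0, 1.0]])
print(np.linalg.eigvals(A), np.linalg.eigvals(B))  # both eigenvalue lists are {1, 1}
print(np.linalg.det(A - B))  # nonzero, so 0 is not an eigenvalue of A - B

# Problem 35-style counterexample: C squares to I2 but cannot be similar
# to I2, since any matrix similar to the identity *is* the identity.
C = np.array([[1.0, 1.0], [0.0, -1.0]])
print(np.allclose(C @ C, np.eye(2)))  # True
```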
Now, since Ker(T) = {0}, we conclude that c1 v1 + c2 v2 + · · · + cn vn = 0. Since {v1, v2, . . . , vn} is a linearly independent set, we conclude that c1 = c2 = · · · = cn = 0, as required. 40. Assume that V1 ≅ V2 and V2 ≅ V3. Then there exist isomorphisms T1 : V1 → V2 and T2 : V2 → V3. Since the composition of two linear transformations is a linear transformation (Theorem 5.4.2), we have a linear transformation T2T1 : V1 → V3. Moreover, since both T1 and T2 are one-to-one and onto, T2T1 is also one-to-one and onto (see Problem 39 in Section 5.4). Thus, T2T1 : V1 → V3 is an isomorphism. Hence, V1 ≅ V3, as required. 41. We have Ai v = λi v for each i = 1, 2, . . . , k. Thus, (A1 A2 . . . Ak)v = (A1 A2 . . . Ak−1)(Ak v) = (A1 A2 . . . Ak−1)(λk v) = λk (A1 A2 . . . Ak−1)v = λk (A1 A2 . . . Ak−2)(Ak−1 v) = λk (A1 A2 . . . Ak−2)(λk−1 v) = λk−1 λk (A1 A2 . . . Ak−2)v = · · · = λ2 λ3 . . . λk (A1 v) = λ2 λ3 . . . λk (λ1 v) = (λ1 λ2 . . . λk)v, which shows that v is an eigenvector of A1 A2 . . . Ak with corresponding eigenvalue λ1 λ2 . . . λk. 42. We first show that T is a linear transformation: T respects addition: Let A and B belong to Mn(R). Then T(A + B) = S−1(A + B)S = S−1AS + S−1BS = T(A) + T(B), and so T respects addition. T respects scalar multiplication: Let A belong to Mn(R), and let k be a scalar. Then T(kA) = S−1(kA)S = k(S−1AS) = kT(A), and so T respects scalar multiplication. Next, we verify that T is both one-to-one and onto (of course, in view of Proposition 5.4.13, it is only necessary to confirm one of these two properties, but we will nonetheless verify them both): T is one-to-one: Assume that T(A) = 0n. That is, S−1AS = 0n. Left multiplying by S and right multiplying by S−1 on both sides of this equation yields A = S 0n S−1 = 0n. Hence, Ker(T) = {0n}, and so T is one-to-one. T is onto: Let B be an arbitrary matrix in Mn(R).
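Problem 41's conclusion is easy to illustrate numerically. In the sketch below (an example of ours, assuming NumPy is available), the three matrices share the eigenvector v = (1, 1) with eigenvalues 3, 1, and 4, and their product has eigenvalue 3 · 1 · 4 = 12 on v.

```python
import numpy as np

v = np.array([1.0, 1.0])
A1 = np.array([[2.0, 1.0], [1.0, 2.0]])    # A1 v = 3 v
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])    # A2 v = 1 v
A3 = np.array([[5.0, -1.0], [-1.0, 5.0]])  # A3 v = 4 v
print((A1 @ A2 @ A3) @ v)  # [12. 12.], i.e. (3*1*4) v
```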
Then T(SBS−1) = S−1(SBS−1)S = (S−1S)B(S−1S) = In B In = B, and hence, B belongs to Rng(T). Since B was an arbitrary element of Mn(R), we conclude that Rng(T) = Mn(R). That is, T is onto. Solutions to Section 6.1 True-False Review: 1. TRUE. This is essentially the statement of Theorem 6.1.3. 2. FALSE. As stated in Theorem 6.1.5, if there is any point x0 in I such that W[y1, y2, . . . , yn](x0) = 0, then {y1, y2, . . . , yn} is linearly dependent on I. 3. FALSE. Many counterexamples are possible. Note that (xD − Dx)(x) = xD(x) − D(x2) = x − 2x = (−1)(x). Therefore, xD − Dx = −1. Setting L1 = x and L2 = D, we therefore see that L1 L2 ≠ L2 L1 in this example. 4. TRUE. By assumption, L1(y1 + y2) = L1(y1) + L1(y2) and L1(cy) = cL1(y) for all functions y, y1, y2. Likewise, L2(y1 + y2) = L2(y1) + L2(y2) and L2(cy) = cL2(y) for all functions y, y1, y2. Therefore, (L1 + L2)(y1 + y2) = L1(y1 + y2) + L2(y1 + y2) = (L1(y1) + L1(y2)) + (L2(y1) + L2(y2)) = (L1(y1) + L2(y1)) + (L1(y2) + L2(y2)) = (L1 + L2)(y1) + (L1 + L2)(y2) and (L1 + L2)(cy) = L1(cy) + L2(cy) = cL1(y) + cL2(y) = c(L1(y) + L2(y)) = c(L1 + L2)(y). Therefore, L1 + L2 is a linear differential operator. 5. TRUE. By assumption L(y1 + y2) = L(y1) + L(y2) and L(ky) = kL(y) for all scalars k. Therefore, for all constants c, we have (cL)(y1 + y2) = cL(y1 + y2) = c(L(y1) + L(y2)) = cL(y1) + cL(y2) = (cL)(y1) + (cL)(y2) and (cL)(ky) = c(L(ky)) = c(k(L(y))) = k(cL)(y). Therefore, cL is a linear differential operator. 6. TRUE. We have L(yp + u) = L(yp) + L(u) = F + 0 = F. 7. TRUE. We have L(y1 + y2) = L(y1) + L(y2) = F1 + F2. Problems: 1. (a) L(y(x)) = (D − x)(2x − 3e2x) = D(2x − 3e2x) − x(2x − 3e2x) = (2 − 6e2x) − 2x2 + 3xe2x = 2(1 − x2) + 3e2x(x − 2). (b) L(y(x)) = (D − x)(3 sin2 x) = D(3 sin2 x) − x(3 sin2 x) = 6 sin x cos x − 3x sin2 x = 3 sin x(2 cos x − x sin x). 2.
(a) L(y(x)) = (D2 − x2D + x)(2x − 3e2x) = D2(2x − 3e2x) − x2D(2x − 3e2x) + x(2x − 3e2x) = D(2 − 6e2x) − x2(2 − 6e2x) + x(2x − 3e2x) = −12e2x − 2x2 + 6x2e2x + 2x2 − 3xe2x = 3e2x(2x2 − x − 4). (b) L(y(x)) = (D2 − x2D + x)(3 sin2 x) = D2(3 sin2 x) − x2D(3 sin2 x) + x(3 sin2 x) = D(6 sin x · cos x) − x2(6 sin x · cos x) + 3x sin2 x = 12 cos2 x − 6 − 6x2 sin x · cos x + 3x sin2 x. 3. (a) L(y(x)) = (D3 − 2xD2)(2x − 3e2x) = D3(2x − 3e2x) − 2xD2(2x − 3e2x) = D3(−3e2x) − 2xD2(−3e2x) = −24e2x + 24xe2x = 24e2x(x − 1). (b) L(y(x)) = (D3 − 2xD2)(3 sin2 x) = D3(3 sin2 x) − 2xD2(3 sin2 x) = D2(6 sin x · cos x) − 2xD(6 sin x · cos x) = D(12 cos2 x − 6) − 2x(12 cos2 x − 6) = −24 sin x · cos x − 24x cos2 x + 12x. 4. (a) L(y(x)) = (D3 − D + 4)(2x − 3e2x) = D3(2x − 3e2x) − D(2x − 3e2x) + 4(2x − 3e2x) = D2(2 − 6e2x) − (2 − 6e2x) + 8x − 12e2x = D(−12e2x) − (2 − 6e2x) + 8x − 12e2x = (−24e2x) − (2 − 6e2x) + 8x − 12e2x = 8x − 30e2x − 2. (b) L(y(x)) = (D3 − D + 4)(3 sin2 x) = D3(3 sin2 x) − D(3 sin2 x) + 4(3 sin2 x) = D2(6 sin x · cos x) − (6 sin x · cos x) + 12 sin2 x = D(12 cos2 x − 6) − 6 sin x · cos x + 12 sin2 x = −24 sin x · cos x − 6 sin x · cos x + 12 sin2 x = 6 sin x(2 sin x − 5 cos x). 5. L(y(x)) = (x2D2 + 2xD − 2)(x−2) = x2D2(x−2) + 2xD(x−2) − 2x−2 = x2D(−2x−3) − 4x−2 − 2x−2 = 6x−2 − 6x−2 = 0. Thus, f ∈ Ker(L). 6. L(y(x)) = (D2 − x−1D + 4x2)(sin(x2)) = D2(sin(x2)) − x−1D(sin(x2)) + 4x2(sin(x2)) = D(2x cos(x2)) − x−1(2x cos(x2)) + 4x2(sin(x2)) = 2[cos(x2) − 2x2 sin(x2)] − 2 cos(x2) + 4x2(sin(x2)) = 0. Thus, f ∈ Ker(L). 7. L(y(x)) = (D3 + D2 + D + 1)(sin x + cos x) = D3(sin x + cos x) + D2(sin x + cos x) + D(sin x + cos x) + (sin x + cos x) = D2(cos x − sin x) + D(cos x − sin x) + cos x − sin x + sin x + cos x = D(−sin x − cos x) + (−sin x − cos x) + 2 cos x = −cos x + sin x − sin x − cos x + 2 cos x = 0. Thus, f ∈ Ker(L). 8.
L(y(x)) = (−D2 + 2D − 1)(xex) = −D2(xex) + 2D(xex) − xex = −D(ex + xex) + 2(ex + xex) − xex = −(ex + ex + xex) + 2ex + 2xex − xex = 0. Thus, f ∈ Ker(L). 9. L(y(x)) = 0 ⇐⇒ (D − 2x)y = 0 ⇐⇒ y' − 2xy = 0. This linear equation has integrating factor I = e^(−∫2x dx) = e^(−x²), so that the differential equation can be written as d/dx (e^(−x²) y) = 0, which has general solution e^(−x²) y = c, that is, y(x) = c e^(x²). Consequently, Ker(L) = {y ∈ C1(R) : y(x) = c e^(x²), c ∈ R}. 10. L(y(x)) = 0 ⇐⇒ (D2 + 1)y = 0 ⇐⇒ y'' + y = 0. This is a second-order homogeneous linear differential equation, so by Theorem 6.1.3, the solution set to this equation forms a 2-dimensional vector space. A little thought shows that both y1 = cos x and y2 = sin x are solutions to the equation. Since they are linearly independent, these two functions form a basis for the solution set. Therefore, Ker(L) = {a cos x + b sin x : a, b ∈ R}. 11. L(y(x)) = 0 ⇐⇒ (D2 + 2D − 15)y = 0 ⇐⇒ y'' + 2y' − 15y = 0. This is a second-order homogeneous linear differential equation, so by Theorem 6.1.3, the solution set to this equation forms a 2-dimensional vector space. Following the hint given in the text, we try for solutions of the form y(x) = erx. Substituting this into the differential equation, we get r2erx + 2rerx − 15erx = 0, or erx(r2 + 2r − 15) = 0. It follows that r2 + 2r − 15 = 0, that is, (r + 5)(r − 3) = 0, and hence, r = −5 and r = 3 are the solutions. Therefore, we obtain the solutions y1 = e−5x and y2 = e3x. Since they are linearly independent (by computing the Wronskian, for instance), these two functions form a basis for the solution set. Therefore, Ker(L) = {ae−5x + be3x : a, b ∈ R}. 12. Ly = 0 ⇐⇒ (x2D + x)y = 0 ⇐⇒ x2y' + xy = 0. Dividing by x2, we can express this as y' + (1/x)y = 0. This differential equation is separable: (1/y) dy = −(1/x) dx. Integrating both sides, we obtain ln |y| = −ln |x| + c1. Therefore |y| = e^(−ln|x| + c1) = c2 e^(−ln|x|). Hence, y(x) = c3 e^(−ln|x|) = c3/|x|.
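The kernels found in Problems 9 through 11 can be confirmed by direct substitution; a sketch using SymPy (assumed available):

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')

# Problem 9: y = c*e^(x^2) satisfies y' - 2xy = 0.
y9 = c * sp.exp(x**2)
print(sp.simplify(sp.diff(y9, x) - 2*x*y9))  # 0

# Problem 10: y = a*cos x + b*sin x satisfies y'' + y = 0.
y10 = a * sp.cos(x) + b * sp.sin(x)
print(sp.simplify(sp.diff(y10, x, 2) + y10))  # 0

# Problem 11: y = a*e^(-5x) + b*e^(3x) satisfies y'' + 2y' - 15y = 0.
y11 = a * sp.exp(-5*x) + b * sp.exp(3*x)
print(sp.simplify(sp.diff(y11, x, 2) + 2*sp.diff(y11, x) - 15*y11))  # 0
```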
Thus, Ker(L) = {c/|x| : c ∈ R}. 13. We have (L1 L2)(f) = L1(f' − 2x2f) = (D + 1)(f' − 2x2f) = f'' − 4xf − 2x2f' + f' − 2x2f = f'' + (1 − 2x2)f' − (4x + 2x2)f, so that L1 L2 = D2 + (1 − 2x2)D − 2x(2 + x). Furthermore, (L2 L1)(f) = L2(f' + f) = (D − 2x2)(f' + f) = f'' + f' − 2x2f' − 2x2f, so that L2 L1 = D2 + (1 − 2x2)D − 2x2. Therefore, L1 L2 ≠ L2 L1. 14. We have (L1 L2)(f) = L1(f' + (2x − 1)f) = (D + x)(f' + (2x − 1)f) = f'' + xf' + 2f + (2x − 1)f' + x(2x − 1)f = (D2 + (3x − 1)D + (2x2 − x + 2))(f), so that L1 L2 = D2 + (3x − 1)D + (2x2 − x + 2). Furthermore, (L2 L1)(f) = L2(f' + xf) = (D + (2x − 1))(f' + xf) = f'' + (2x − 1)f' + f + xf' + (2x − 1)xf = (D2 + (3x − 1)D + (2x2 − x + 1))(f), so that L2 L1 = D2 + (3x − 1)D + (2x2 − x + 1). Therefore, L1 L2 ≠ L2 L1. 15. We have (L1 L2)(f) = L1((D + b1)f) = (D + a1)(f' + b1f) = f'' + (b1 + a1)f' + (b1' + a1b1)f. Thus L1 L2 = D2 + (b1 + a1)D + (b1' + a1b1). Similarly, L2 L1 = D2 + (a1 + b1)D + (a1' + b1a1). Thus L1 L2 − L2 L1 = b1' − a1', which is the zero operator if and only if b1' = a1', which can be integrated directly to obtain b1 = a1 + c2, where c2 is an arbitrary constant. Consequently we must have L2 = D + [a1(x) + c2]. 16. (D3 + x2D2 − (sin x)D + ex)y = x3 and y''' + x2y'' − (sin x)y' + exy = 0. 17. (D2 + 4xD − 6x2)y = x2 sin x and y'' + 4xy' − 6x2y = 0. 18. x2 and ex are both continuous functions for all x ∈ R and in particular on any interval I containing x0 = 0. Thus, y(x) = 0 is clearly a solution to the given differential equation and also satisfies the initial conditions. Thus, by the existence-uniqueness theorem, y(x) = 0 is the only solution to the initial-value problem. 19. Let a1, ..., an be functions that are continuous on the interval I. Then, for any x0 in I, the initial-value problem y^(n) + a1(x)y^(n−1) + ... + an−1(x)y' + an(x)y = 0, y(x0) = 0, y'(x0) = 0, ..., y^(n−1)(x0) = 0, has only the trivial solution y(x) = 0.
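The operator compositions in Problems 13 through 15 can be checked symbolically. The sketch below (SymPy assumed available) redoes Problem 13 and exhibits the difference L1L2 − L2L1 = −4x, confirming that these operators do not commute.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
D = lambda g: sp.diff(g, x)
L1 = lambda g: D(g) + g             # L1 = D + 1
L2 = lambda g: D(g) - 2*x**2 * g    # L2 = D - 2x^2

diff_ops = sp.simplify(L1(L2(f)) - L2(L1(f)))
print(diff_ops)  # -4*x*f(x), so L1 L2 != L2 L1
```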
Proof: All of the conditions of the existence-uniqueness theorem are satisfied and y(x) = 0 is a solution; consequently, it is the only solution. 20. Given y'' − 2y' − 3y = 0, then r2 − 2r − 3 = 0 =⇒ r ∈ {−1, 3} =⇒ y(x) = c1 e−x + c2 e3x. 21. Given y'' + 7y' + 10y = 0, then r2 + 7r + 10 = 0 =⇒ r ∈ {−5, −2} =⇒ y(x) = c1 e−5x + c2 e−2x. 22. Given y'' − 36y = 0, then r2 − 36 = 0 =⇒ r ∈ {−6, 6} =⇒ y(x) = c1 e−6x + c2 e6x. 23. Given y'' + 4y' = 0, then r2 + 4r = 0 =⇒ r ∈ {−4, 0} =⇒ y(x) = c1 e−4x + c2. 24. Substituting y(x) = erx into the given differential equation yields erx(r3 − 3r2 − r + 3) = 0, so that we will have a solution provided that r satisfies r3 − 3r2 − r + 3 = 0, that is, (r − 1)(r + 1)(r − 3) = 0. Consequently, three solutions to the given differential equation are y1(x) = ex, y2(x) = e−x, y3(x) = e3x. Further, the Wronskian of these solutions is W[y1, y2, y3] = det[ex, e−x, e3x; ex, −e−x, 3e3x; ex, e−x, 9e3x] = −16e3x. Since the Wronskian is never zero, the solutions are linearly independent on any interval. Hence the general solution to the differential equation is y(x) = c1 ex + c2 e−x + c3 e3x. 25. Substituting y(x) = erx into the given differential equation yields erx(r3 + 3r2 − 4r − 12) = 0, so that we will have a solution provided that r satisfies r3 + 3r2 − 4r − 12 = 0, that is, (r − 2)(r + 2)(r + 3) = 0. Consequently, three solutions to the given differential equation are y1(x) = e2x, y2(x) = e−2x, y3(x) = e−3x. Further, the Wronskian of these solutions is W[y1, y2, y3] = det[e2x, e−2x, e−3x; 2e2x, −2e−2x, −3e−3x; 4e2x, 4e−2x, 9e−3x] = −20e−3x. Since the Wronskian is never zero, the solutions are linearly independent on any interval. Hence the general solution to the differential equation is y(x) = c1 e2x + c2 e−2x + c3 e−3x. 26. Substituting y(x) = erx into the given differential equation yields erx(r3 + 3r2 − 18r − 40) = 0, so that we will have a solution provided r satisfies r3 + 3r2 − 18r − 40 = 0, that is, (r + 5)(r + 2)(r − 4) = 0.
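The root-finding step in Problems 20 through 30 is routine to verify; a sketch for Problem 20 (SymPy assumed available), with checkodesol confirming the general solution:

```python
import sympy as sp

x, r = sp.symbols('x r')
roots = sp.solve(r**2 - 2*r - 3, r)
print(roots)  # the roots -1 and 3

y = sp.Function('y')
ode = y(x).diff(x, 2) - 2*y(x).diff(x) - 3*y(x)
sol = sp.dsolve(ode, y(x))
print(sol)  # y(x) = C1*exp(-x) + C2*exp(3*x), up to relabeling of constants
print(sp.checkodesol(ode, sol))
```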
Consequently, three solutions to the differential equation are y1(x) = e−5x, y2(x) = e−2x, y3(x) = e4x. Further, the Wronskian of these solutions is W[y1, y2, y3] = det[e−5x, e−2x, e4x; −5e−5x, −2e−2x, 4e4x; 25e−5x, 4e−2x, 16e4x] = 162e−3x. Since the Wronskian is never zero, the solutions are linearly independent on any interval. Hence the general solution to the differential equation is y(x) = c1 e−5x + c2 e−2x + c3 e4x. 27. Given y''' − y'' − 2y' = 0, then r3 − r2 − 2r = 0 =⇒ r(r2 − r − 2) = 0 =⇒ r ∈ {−1, 0, 2} =⇒ y(x) = c1 e−x + c2 + c3 e2x. 28. Given y''' + y'' − 10y' + 8y = 0, then r3 + r2 − 10r + 8 = 0 =⇒ (r − 2)(r − 1)(r + 4) = 0 =⇒ r ∈ {−4, 1, 2} =⇒ y(x) = c1 e−4x + c2 ex + c3 e2x. 29. Given y(iv) − 2y''' − y'' + 2y' = 0, then r4 − 2r3 − r2 + 2r = 0 =⇒ r(r3 − 2r2 − r + 2) = 0 =⇒ r(r − 2)(r − 1)(r + 1) = 0 =⇒ r ∈ {−1, 0, 1, 2} =⇒ y(x) = c1 e−x + c2 + c3 ex + c4 e2x. 30. Given y(iv) − 13y'' + 36y = 0, then r4 − 13r2 + 36 = 0 =⇒ (r2 − 9)(r2 − 4) = 0 =⇒ r2 ∈ {4, 9} =⇒ r ∈ {−3, −2, 2, 3} =⇒ y(x) = c1 e−3x + c2 e−2x + c3 e2x + c4 e3x. 31. Given x2y'' + 3xy' − 8y = 0, the trial solution y(x) = x^r gives x2 · r(r − 1)x^(r−2) + 3x · r x^(r−1) − 8x^r = 0, or x^r[r(r − 1) + 3r − 8] = 0. Therefore, r2 + 2r − 8 = 0, which factors as (r + 4)(r − 2) = 0. Therefore, r = −4 or r = 2. Hence, we obtain the solutions y1(x) = x−4 and y2(x) = x2. Furthermore, W[x−4, x2] = (x−4)(2x) − (−4x−5)(x2) = 6x−3 ≠ 0, so that {x−4, x2} is a linearly independent set of solutions to the given differential equation on (0, ∞). Consequently, from Theorem 6.1.3, the general solution is given by y(x) = c1 x−4 + c2 x2. 32. Given 2x2y'' + 5xy' + y = 0, the trial solution y(x) = x^r gives 2x2 · r(r − 1)x^(r−2) + 5x · r x^(r−1) + x^r = 0, or x^r[2r(r − 1) + 5r + 1] = 0. Therefore, 2r2 + 3r + 1 = 0, which factors as (2r + 1)(r + 1) = 0. Therefore, r = −1/2 and r = −1. Hence, we obtain the solutions y1(x) = x^(−1/2) and y2(x) = x−1.
Furthermore, W[x^(−1/2), x−1] = (x^(−1/2))(−x−2) − (−(1/2)x^(−3/2))(x−1) = −(1/2)x^(−5/2) ≠ 0, so that {x^(−1/2), x−1} is a linearly independent set of solutions to the given differential equation on (0, ∞). Consequently, from Theorem 6.1.3, the general solution is given by y(x) = c1 x^(−1/2) + c2 x−1. 33. Substituting y(x) = x^r into the given differential equation yields x^r[r(r − 1)(r − 2) + r(r − 1) − 2r + 2] = 0, so that r must satisfy (r − 1)(r − 2)(r + 1) = 0. It follows that three solutions to the differential equation are y1(x) = x, y2(x) = x2, y3(x) = x−1. Further, the Wronskian of these solutions is W[y1, y2, y3] = det[x, x2, x−1; 1, 2x, −x−2; 0, 2, 2x−3] = 6x−1. Since the Wronskian is nonzero on (0, ∞), the solutions are linearly independent on this interval. Consequently, the general solution to the differential equation is y(x) = c1 x + c2 x2 + c3 x−1. 34. Given x3y''' + 3x2y'' − 6xy' = 0, the trial solution y(x) = x^r gives x3 · r(r − 1)(r − 2)x^(r−3) + 3x2 · r(r − 1)x^(r−2) − 6x · r x^(r−1) = 0, or x^r[r(r − 1)(r − 2) + 3r(r − 1) − 6r] = 0. Therefore, r[(r − 1)(r − 2) + 3(r − 1) − 6] = 0, that is, r[r2 − 7] = 0, so that r = 0 or r = ±√7. Hence, we obtain the solutions y1(x) = 1, y2(x) = x^(√7), and y3(x) = x^(−√7). Furthermore, W[1, x^(√7), x^(−√7)] = det[1, x^(√7), x^(−√7); 0, √7 x^(√7−1), −√7 x^(−√7−1); 0, √7(√7 − 1)x^(√7−2), −√7(−√7 − 1)x^(−√7−2)] = −7(−√7 − 1)x−3 + 7(√7 − 1)x−3 = 14√7 x−3 ≠ 0, so that {1, x^(√7), x^(−√7)} is a linearly independent set of solutions to the given differential equation on (0, ∞). Consequently, from Theorem 6.1.3, the general solution is given by y(x) = c1 + c2 x^(√7) + c3 x^(−√7). 35. To determine a particular solution of the form yp(x) = A0 e3x, we substitute this solution into the differential equation: (A0 e3x)'' + (A0 e3x)' − 6(A0 e3x) = 18e3x. Therefore 9A0 e3x + 3A0 e3x − 6A0 e3x = 18e3x, which forces A0 = 3. Therefore, yp(x) = 3e3x.
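The indicial computation for the Cauchy-Euler equations in Problems 31 through 34 can be automated; a sketch for Problem 31 (SymPy assumed available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
r = sp.symbols('r')
y = x**r
# Substituting y = x^r into x^2 y'' + 3x y' - 8y = 0 and cancelling x^r:
indicial = sp.expand(sp.simplify(
    (x**2 * sp.diff(y, x, 2) + 3*x * sp.diff(y, x) - 8*y) / x**r))
print(indicial)               # r**2 + 2*r - 8
print(sp.solve(indicial, r))  # the roots -4 and 2
```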
To obtain the general solution, we need to find the complementary function yc(x), the solution to the associated homogeneous differential equation: y'' + y' − 6y = 0. Seeking solutions of the form y(x) = erx, we obtain r2erx + rerx − 6erx = 0, or erx(r2 + r − 6) = 0. Therefore, r2 + r − 6 = 0, which factors as (r + 3)(r − 2) = 0. Hence, r = −3 or r = 2. Therefore, we obtain the solutions y1(x) = e−3x and y2(x) = e2x. Since W[e−3x, e2x] = (e−3x)(2e2x) − (−3e−3x)(e2x) = 2e−x + 3e−x = 5e−x ≠ 0, {e−3x, e2x} is linearly independent. Therefore, the complementary function is yc(x) = c1 e−3x + c2 e2x. By Theorem 6.1.7, the general solution to the differential equation is y(x) = c1 e−3x + c2 e2x + 3e3x. 36. Substituting y(x) = A0 + A1x + A2x2 into the differential equation yields (A0 + A1x + A2x2)'' + (A0 + A1x + A2x2)' − 2(A0 + A1x + A2x2) = 4x2, or (2A2 + A1 − 2A0) + (2A2 − 2A1)x − 2A2x2 = 4x2. Equating the powers of x on each side of this equation yields 2A2 + A1 − 2A0 = 0, 2A2 − 2A1 = 0, −2A2 = 4. Solving for A0, A1, and A2, we find that A0 = −3, A1 = −2, and A2 = −2. Thus, yp(x) = −3 − 2x − 2x2. To obtain the general solution, we need to find the complementary function yc(x), the solution to the associated homogeneous differential equation: y'' + y' − 2y = 0. Seeking solutions of the form y(x) = erx, we obtain r2erx + rerx − 2erx = 0, or erx(r2 + r − 2) = 0. Therefore, r2 + r − 2 = 0, which factors as (r + 2)(r − 1) = 0. Hence, r = −2 or r = 1. Therefore, we obtain the solutions y1(x) = e−2x and y2(x) = ex. Since W[e−2x, ex] = (e−2x)(ex) − (−2e−2x)(ex) = 3e−x ≠ 0, {e−2x, ex} is linearly independent. Therefore, the complementary function is yc(x) = c1 e−2x + c2 ex. By Theorem 6.1.7, the general solution to the differential equation is y(x) = c1 e−2x + c2 ex − 3 − 2x − 2x2. 37. Substituting y(x) = A0 e3x into the given differential equation yields A0 e3x(27 + 18 − 3 − 2) = 4e3x, so that A0 = 1/10.
Hence, a particular solution to the differential equation is yp(x) = (1/10)e3x. To determine the general solution we need to solve the associated homogeneous differential equation y''' + 2y'' − y' − 2y = 0. We try for solutions of the form y(x) = erx. Substituting into the homogeneous differential equation gives erx(r3 + 2r2 − r − 2) = 0, so that we choose r to satisfy r3 + 2r2 − r − 2 = 0, or equivalently, (r + 2)(r − 1)(r + 1) = 0. It follows that three solutions to the differential equation are y1(x) = e−2x, y2(x) = ex, y3(x) = e−x. Further, the Wronskian of these solutions is W[y1, y2, y3] = det[e−2x, ex, e−x; −2e−2x, ex, −e−x; 4e−2x, ex, e−x] = −6e−2x. Since the Wronskian is nonzero, the solutions are linearly independent on any interval. It follows that the complementary function for the given nonhomogeneous differential equation is yc(x) = c1 e−2x + c2 ex + c3 e−x, so the general solution to the differential equation is y(x) = yc(x) + yp(x) = c1 e−2x + c2 ex + c3 e−x + (1/10)e3x. 38. Substituting y(x) = A0 e−2x into the differential equation yields −8A0 e−2x + 4A0 e−2x + 20A0 e−2x + 8A0 e−2x = e−2x. Therefore, 24A0 = 1. Hence, A0 = 1/24. Thus, yp(x) = (1/24)e−2x. To obtain the general solution, we need to find the complementary function yc(x), the solution to the associated homogeneous differential equation: y''' + y'' − 10y' + 8y = 0. Seeking solutions of the form y(x) = erx, we obtain r3erx + r2erx − 10rerx + 8erx = 0, or erx(r3 + r2 − 10r + 8) = 0. Therefore, r3 + r2 − 10r + 8 = 0, which has roots r = 1, r = 2, and r = −4. Therefore, we obtain the solutions y1(x) = ex, y2(x) = e2x, and y3(x) = e−4x. Since W[ex, e2x, e−4x] = det[ex, e2x, e−4x; ex, 2e2x, −4e−4x; ex, 4e2x, 16e−4x] = 30e−x ≠ 0, {ex, e2x, e−4x} is linearly independent. Thus, the complementary function is yc(x) = c1 ex + c2 e2x + c3 e−4x. By Theorem 6.1.7, the general solution to the differential equation is y(x) = c1 ex + c2 e2x + c3 e−4x + (1/24)e−2x. 39.
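The undetermined-coefficients steps in Problems 37 through 39 each reduce to one substitution; a sketch checking Problem 37's particular solution (SymPy assumed available):

```python
import sympy as sp

x = sp.symbols('x')
yp = sp.exp(3*x) / 10
# Residual of y''' + 2y'' - y' - 2y = 4e^(3x) at y = yp:
residual = yp.diff(x, 3) + 2*yp.diff(x, 2) - yp.diff(x) - 2*yp - 4*sp.exp(3*x)
print(sp.simplify(residual))  # 0, so yp is indeed a particular solution
```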
Substituting y(x) = A0 e4x into the differential equation yields 64A0 e4x + 80A0 e4x + 24A0 e4x = −3e4x. Therefore, 168A0 = −3. Hence, A0 = −1/56. Thus, yp(x) = −(1/56)e4x. To obtain the general solution, we need to find the complementary function yc(x), the solution to the associated homogeneous differential equation: y''' + 5y'' + 6y' = 0. Seeking solutions of the form y(x) = erx, we obtain r3erx + 5r2erx + 6rerx = 0, or erx(r3 + 5r2 + 6r) = 0. Therefore, r3 + 5r2 + 6r = 0, which has roots r = 0, r = −2, and r = −3. Therefore, we obtain the solutions y1(x) = 1, y2(x) = e−2x, and y3(x) = e−3x. Since W[1, e−2x, e−3x] = det[1, e−2x, e−3x; 0, −2e−2x, −3e−3x; 0, 4e−2x, 9e−3x] = −6e−5x ≠ 0, {1, e−2x, e−3x} is linearly independent. Therefore, the complementary function is yc(x) = c1 + c2 e−2x + c3 e−3x. By Theorem 6.1.7, the general solution to the differential equation is y(x) = c1 + c2 e−2x + c3 e−3x − (1/56)e4x. 40. Prior to the statement of Theorem 6.1.3 it was shown that the set of all solutions forms a vector space. We now show that the dimension of this solution space is n, by constructing a basis. Let y1, y2, ..., yn be the unique solutions of the n initial-value problems a0(x)yi^(n) + a1(x)yi^(n−1) + ... + an−1(x)yi' + an(x)yi = 0, yi^(k−1)(x0) = δik, k = 1, 2, ..., n, respectively. The Wronskian of these functions at x0 is W[y1, y2, ..., yn](x0) = det[In] = 1 ≠ 0, so that the solutions are linearly independent on I. We now show that they span the solution space. Let u(x) be any solution of the differential equation on I, and suppose that u(x0) = u1, u'(x0) = u2, ..., u^(n−1)(x0) = un, where u1, u2, ..., un are constants. It follows that y = u(x) is the unique solution to the initial-value problem Ly = 0, y(x0) = u1, y'(x0) = u2, ..., y^(n−1)(x0) = un. However, if we define w(x) = u1 y1(x) + u2 y2(x) + ... + un yn(x), then w(x) also satisfies this initial-value problem.
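The basis construction in Problem 40 can be made concrete for n = 2. In the sketch below (our example, SymPy assumed available), y'' + y = 0 is solved with the initial data given by the standard basis vectors at x0 = 0; the resulting solutions cos x and sin x have Wronskian det(I2) = 1 there.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = y(x).diff(x, 2) + y(x)

# y1 has (y(0), y'(0)) = (1, 0); y2 has (y(0), y'(0)) = (0, 1).
y1 = sp.dsolve(ode, y(x), ics={y(0): 1, y(x).diff(x).subs(x, 0): 0}).rhs
y2 = sp.dsolve(ode, y(x), ics={y(0): 0, y(x).diff(x).subs(x, 0): 1}).rhs
print(y1, y2)  # cos(x) sin(x)

W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))
print(W)  # 1, so the Wronskian is nonzero everywhere
```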
Thus, by uniqueness, we must have u(x) = w(x), that is, u(x) = u1 y1(x) + u2 y2(x) + ... + un yn(x). Thus we have shown that {y1, y2, ..., yn} forms a basis for the solution space, and hence the dimension of this solution space is n. 41. Consider the linear system c1 y1(x0) + c2 y2(x0) + ... + cn yn(x0) = 0, c1 y1'(x0) + c2 y2'(x0) + ... + cn yn'(x0) = 0, ..., c1 y1^(n−1)(x0) + c2 y2^(n−1)(x0) + ... + cn yn^(n−1)(x0) = 0, where we are solving for c1, c2, ..., cn. The determinant of the matrix of coefficients of this system is W[y1, y2, ..., yn](x0) = 0, so that the system has non-trivial solutions. Let (α1, α2, ..., αn) be one such non-trivial solution for (c1, c2, ..., cn). Therefore, not all of the αi are zero. Define the function u(x) by u(x) = α1 y1(x) + α2 y2(x) + ... + αn yn(x). It follows that y = u(x) satisfies the initial-value problem a0 y^(n) + a1 y^(n−1) + ... + an−1 y' + an y = 0, y(x0) = 0, y'(x0) = 0, ..., y^(n−1)(x0) = 0. However, y(x) = 0 also satisfies the above initial-value problem and hence, by uniqueness, we must have u(x) = 0, that is, α1 y1(x) + α2 y2(x) + ... + αn yn(x) = 0, where not all of the αi are zero. Thus, the functions y1, y2, ..., yn are linearly dependent on I. 42. Let vp be a particular solution to the equation (6.1.14), and let v be any solution to this equation. Then T(vp) = w and T(v) = w. Subtracting these two equations gives T(v) − T(vp) = 0, or equivalently, since T is a linear transformation, T(v − vp) = 0. But this latter equation implies that the vector v − vp is an element of Ker(T) and therefore can be written as v − vp = c1 v1 + c2 v2 + ... + cn vn, for some scalars c1, c2, ..., cn. Hence, v = c1 v1 + c2 v2 + ... + cn vn + vp. 43. Let y1 and y2 belong to C^n(I), and let c be a scalar. We must show that L(y1 + y2) = L(y1) + L(y2) and L(cy1) = cL(y1).
We have L(y1 + y2) = (Dn + a1Dn−1 + · · · + an−1D + an)(y1 + y2) = Dn(y1 + y2) + a1Dn−1(y1 + y2) + · · · + an−1D(y1 + y2) + an(y1 + y2) = (Dny1 + a1Dn−1y1 + · · · + an−1Dy1 + any1) + (Dny2 + a1Dn−1y2 + · · · + an−1Dy2 + any2) = (Dn + a1Dn−1 + · · · + an−1D + an)(y1) + (Dn + a1Dn−1 + · · · + an−1D + an)(y2) = L(y1) + L(y2), and L(cy1) = (Dn + a1Dn−1 + · · · + an−1D + an)(cy1) = Dn(cy1) + a1Dn−1(cy1) + · · · + an−1D(cy1) + an(cy1) = cDn(y1) + ca1Dn−1(y1) + · · · + can−1D(y1) + can(y1) = c(Dn(y1) + a1Dn−1(y1) + · · · + an−1D(y1) + an(y1)) = c(Dn + a1Dn−1 + · · · + an−1D + an)(y1) = cL(y1). Solutions to Section 6.2 True-False Review: 1. FALSE. Even if the auxiliary polynomial fails to have n distinct roots, the differential equation still has n linearly independent solutions. For example, if L = D2 + 2D + 1, then the differential equation Ly = 0 has auxiliary polynomial with (repeated) roots r = −1, −1. Yet we do have two linearly independent solutions y1(x) = e−x and y2(x) = xe−x to the differential equation. 2. FALSE. Theorem 6.2.1 only applies to polynomial differential operators. However, in general, many counterexamples can be given. For example, note that (xD − Dx)(x) = xD(x) − D(x2) = x − 2x = (−1)(x). Therefore, xD − Dx = −1. Setting L1 = x and L2 = D, we therefore see that L1 L2 ≠ L2 L1 in this example. 3. TRUE. This is really just the statement that a polynomial of degree n always has n roots, with multiplicities counted. 4. TRUE. Since 0 is a root of multiplicity four, each term of the polynomial differential operator must contain a factor of D4, so that any polynomial of degree three or less becomes zero after taking four (or more) derivatives. Therefore, for a homogeneous differential equation of this type, a polynomial of degree three or less must be a solution. 5. FALSE. Note that r = 0 is a root of the auxiliary polynomial, but only of multiplicity 1.
The expression c1 + c2 x in the solution reflects r = 0 as a root of multiplicity 2. 6. TRUE. The roots of the auxiliary polynomial are r = −3, −3, 5i, −5i. The portion of the solution corresponding to the repeated root r = −3 is c1 e−3x + c2 xe−3x, and the portion of the solution corresponding to the complex conjugate pair r = ±5i is c3 cos 5x + c4 sin 5x. 7. TRUE. The roots of the auxiliary polynomial are r = 2 ± i, 2 ± i. The terms corresponding to the first pair 2 ± i are c1 e2x cos x and c2 e2x sin x, and the repeated root gives two more terms: c3 xe2x cos x and c4 xe2x sin x. 8. FALSE. Many counterexamples can be given. For instance, if P(D) = D − 1, then the general solution is y(x) = cex. However, the differential equation (P(D))2y = (D − 1)2y = 0 has auxiliary equation with roots r = 1, 1 and general solution z(x) = c1 ex + c2 xex ≠ xy(x). Problems: 1. The auxiliary polynomial is P(r) = r2 + 2r − 3 = (r + 3)(r − 1). Therefore, the auxiliary equation has roots r = −3 and r = 1. Therefore, two linearly independent solutions to the given differential equation are y1(x) = e−3x and y2(x) = ex. By Theorem 6.1.3, the solution space to this differential equation is 2-dimensional, and hence {e−3x, ex} forms a basis for the solution space. 2. The auxiliary polynomial is P(r) = r2 + 6r + 9 = (r + 3)2. Therefore, the auxiliary equation has the root r = −3 (with multiplicity 2). Therefore, two linearly independent solutions to the given differential equation are y1(x) = e−3x and y2(x) = xe−3x. By Theorem 6.1.3, the solution space to this differential equation is 2-dimensional, and hence {e−3x, xe−3x} forms a basis for the solution space. 3. The auxiliary polynomial is P(r) = r2 − 6r + 25. According to the quadratic formula, the auxiliary equation has roots r = 3 ± 4i. Therefore, two linearly independent solutions to the given differential equation are y1(x) = e3x cos 4x and y2(x) = e3x sin 4x.
By Theorem 6.1.3, the solution space to this differential equation is 2-dimensional, and hence {e3x cos 4x, e3x sin 4x} forms a basis for the solution space. 4. The general vector in S takes the form y(x) = c(sin 4x + 5 cos 4x). The full solution space for this differential equation is 2-dimensional by Theorem 6.1.3. Note that sin 4x belongs to the solution space and is linearly independent from sin 4x + 5 cos 4x, since W[sin 4x + 5 cos 4x, sin 4x] = 20 ≠ 0. Therefore, we can extend the basis for S to a basis for the entire solution space with {sin 4x + 5 cos 4x, sin 4x}. Many other extensions are also possible, of course. 5. We have r2 − r − 2 = 0 =⇒ r ∈ {−1, 2} =⇒ y(x) = c1 e−x + c2 e2x. 6. We have r2 − 6r + 9 = 0 =⇒ r ∈ {3, 3} =⇒ y(x) = c1 e3x + c2 xe3x. 7. We have r2 + 6r + 25 = 0 =⇒ r ∈ {−3 − 4i, −3 + 4i} =⇒ y(x) = c1 e−3x cos 4x + c2 e−3x sin 4x. 8. We have (r + 1)(r − 5) = 0 =⇒ r ∈ {−1, 5} =⇒ y(x) = c1 e−x + c2 e5x. 9. We have (r + 2)2 = 0 =⇒ r ∈ {−2, −2} =⇒ y(x) = c1 e−2x + c2 xe−2x. 10. We have r2 − 6r + 34 = 0 =⇒ r ∈ {3 − 5i, 3 + 5i} =⇒ y(x) = c1 e3x cos 5x + c2 e3x sin 5x. 11. We have r2 + 10r + 25 = 0 =⇒ r ∈ {−5, −5} =⇒ y(x) = c1 e−5x + c2 xe−5x. 12. We have r2 − 2 = 0 =⇒ r ∈ {−√2, √2} =⇒ y(x) = c1 e^(−√2 x) + c2 e^(√2 x). 13. We have r2 + 8r + 20 = 0 =⇒ r ∈ {−4 − 2i, −4 + 2i} =⇒ y(x) = c1 e−4x cos 2x + c2 e−4x sin 2x. 14. We have r2 + 2r + 2 = 0 =⇒ r ∈ {−1 − i, −1 + i} =⇒ y(x) = c1 e−x cos x + c2 e−x sin x. 15. We have (r − 4)(r + 2) = 0 =⇒ r ∈ {−2, 4} =⇒ y(x) = c1 e−2x + c2 e4x. 16. We have r2 − 14r + 58 = 0 =⇒ r ∈ {7 − 3i, 7 + 3i} =⇒ y(x) = c1 e7x cos 3x + c2 e7x sin 3x. 17. We have r3 − r2 + r − 1 = 0 =⇒ r ∈ {1, i, −i} =⇒ y(x) = c1 ex + c2 cos x + c3 sin x. 18. We have r3 − 2r2 − 4r + 8 = 0 =⇒ r ∈ {−2, 2, 2} =⇒ y(x) = c1 e−2x + (c2 + c3 x)e2x. 19. We have (r − 2)(r2 − 16) = 0 =⇒ r ∈ {2, 4, −4} =⇒ y(x) = c1 e2x + c2 e4x + c3 e−4x. 20.
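Each of Problems 5 through 19 comes down to the roots of the auxiliary polynomial; a sketch for Problem 7 (SymPy assumed available):

```python
import sympy as sp

r = sp.symbols('r')
roots = sp.solve(r**2 + 6*r + 25, r)
print(roots)  # the complex conjugate pair -3 - 4i and -3 + 4i
```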
We have (r2 + 2r + 10)2 = 0 =⇒ r ∈ {−1 + 3i, −1 + 3i, −1 − 3i, −1 − 3i} =⇒ y (x) = e−x [c1 cos 3x + c2 sin 3x + x(c3 cos 3x + c4 sin 3x)]. 21. We have (r2 + 4)2 (r + 1) = 0 =⇒ r ∈ {2i, 2i, −2i, −2i, −1} =⇒ y (x) = c1 e−x + c2 cos 2x + c3 sin 2x + x(c4 cos 2x + c5 sin 2x). 22. We have (r2 + 3)(r + 1)2 = 0 =⇒ r ∈ {−1, −1, −i√3, i√3} =⇒ y (x) = c1 e−x + c2 xe−x + c3 cos (√3 x) + c4 sin (√3 x). 23. We have r2 (r − 1) = 0 =⇒ r ∈ {0, 0, 1} =⇒ y (x) = c1 + c2 x + c3 ex . 24. We have r4 − 8r2 + 16 = (r − 2)2 (r + 2)2 = 0 =⇒ r ∈ {2, 2, −2, −2} =⇒ y (x) = e2x (c1 + c2 x) + e−2x (c3 + c4 x). 25. We have r4 − 16 = 0 =⇒ r ∈ {2, −2, 2i, −2i} =⇒ y (x) = c1 e2x + c2 e−2x + c3 cos 2x + c4 sin 2x. 26. We have r3 + 8r2 + 22r + 20 = 0 =⇒ r ∈ {−2, −3 + i, −3 − i} =⇒ y (x) = c1 e−2x + e−3x (c2 cos x + c3 sin x). 27. We have r4 − 16r2 + 40r − 25 = 0 =⇒ r ∈ {1, −5, 2 + i, 2 − i} =⇒ y (x) = c1 ex + c2 e−5x + e2x (c3 cos x + c4 sin x). 28. We have (r − 1)3 (r2 + 9) = 0 =⇒ r ∈ {1, 1, 1, −3i, 3i} =⇒ y (x) = ex (c1 + c2 x + c3 x2 ) + c4 cos 3x + c5 sin 3x. 29. We have (r2 − 2r + 2)2 (r2 − 1) = 0 =⇒ r ∈ {1 + i, 1 + i, 1 − i, 1 − i, 1, −1} =⇒ y (x) = ex (c1 cos x + c2 sin x) + xex (c3 cos x + c4 sin x) + c5 e−x + c6 ex . 30. We have (r + 3)(r − 1)(r + 5)3 = 0 =⇒ r ∈ {−3, 1, −5, −5, −5} =⇒ y (x) = c1 e−3x + c2 ex + e−5x (c3 + c4 x + c5 x2 ). 31. We have (r2 + 9)3 = 0 =⇒ r ∈ {3i, 3i, 3i, −3i, −3i, −3i} =⇒ y (x) = c1 cos 3x + c2 sin 3x + x(c3 cos 3x + c4 sin 3x) + x2 (c5 cos 3x + c6 sin 3x). 32. We have r2 − 8r + 16 = 0 =⇒ r ∈ {4, 4} =⇒ y (x) = c1 e4x + c2 xe4x . Now 2 = y (0) = c1 and 7 = y (0) = 4c1 + c2 . Therefore, c1 = 2 and c2 = −1. Hence, the solution to this initial-value problem is y (x) = 2e4x − xe4x . 33. We have r2 − 4r + 5 = 0 =⇒ r ∈ {2 + i, 2 − i} =⇒ y (x) = c1 e2x cos x + c2 e2x sin x. Now 3 = y (0) = c1 and 5 = y (0) = 2c1 + c2 . Therefore, c1 = 3 and c2 = −1. Hence, the solution to this initial-value problem is y (x) = 3e2x cos x − e2x sin x. 34.
We have r3 − r2 + r − 1 = 0 =⇒ r ∈ {1, i, −i} =⇒ y (x) = c1 ex + c2 cos x + c3 sin x. Then since y (0) = 0, we have c1 + c2 = 0. Moreover, since y (0) = 1, we have c1 + c3 = 1. Finally, y (0) = 2 implies that c1 − c2 = 2. Solving these equations yields c1 = 1, c2 = −1, c3 = 0. Hence, the solution to this initial-value problem is y (x) = ex − cos x. 35. We have r3 + 2r2 − 4r − 8 = (r − 2)(r + 2)2 = 0 =⇒ r ∈ {2, −2, −2} =⇒ y (x) = c1 e2x + c2 e−2x + c3 xe−2x . Then since y (0) = 0, we have c1 + c2 = 0. Moreover, since y (0) = 6, we have 2c1 − 2c2 + c3 = 6. Further, y (0) = 8 implies that 4c1 + 4c2 − 4c3 = 8. Solving these equations yields c1 = 2, c2 = c3 = −2. Hence, the solution to this initial-value problem is y (x) = 2(e2x − e−2x − xe−2x ). 36. Given m > 0 and k > 0, we have auxiliary polynomial P (r) = r2 − 2mr + (m2 + k 2 ) = 0 =⇒ r = m ± ki. Therefore, y (x) = emx (c1 cos kx + c2 sin kx). Diﬀerentiating, we obtain y (x) = memx (c1 cos kx + c2 sin kx) + emx (−c1 k sin kx + c2 k cos kx). Now 0 = y (0) = c1 and k = y (0) = c2 k . Therefore, c1 = 0 and c2 = 1. Hence, the solution to this initial-value problem is y (x) = emx sin kx. 37. Given m > 0 and k > 0, we have auxiliary polynomial P (r) = r2 − 2mr + (m2 − k 2 ) = 0 =⇒ r = m ± k . Therefore, the general solution to this diﬀerential equation is y (x) = ae(m+k)x +be(m−k)x = emx (aekx +be−kx ). Letting a = c1 +c2 and b = c1 −c2 in the last equality gives 2 2 y (x) = emx c1 + c2 kx c1 − c2 −kx e+ e 2 2 = emx c1 ekx + e−kx ekx − e−kx + c2 2 2 = emx (c1 cosh kx + c2 sinh kx). 38. The auxiliary polynomial is P (r) = r2 + 2cr + k 2 . The quadratic formula supplies the roots of P (r) = 0: √ r = −c ± c2 − k 2 . (a) If c2 < k 2 , then c2 − k 2 < 0 and the roots above are complex. We can write r = −c ± ωi, where √ ω = k 2 − c2 . Thus, y (t) = e−ct (c1 cos ωt + c2 sin ωt) and y (t) = e−ct [(ωc2 − cc1 ) cos ωt − (cc2 + ωc1 ) sin ωt] . 
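The closed-form answer to Problem 36 can be confirmed by direct substitution. The sketch below is an addition to these solutions (assuming SymPy is available); it substitutes y(x) = e^(mx) sin(kx) into the differential equation and checks the initial conditions.

```python
import sympy as sp

x, m, k = sp.symbols('x m k', positive=True)

# Problem 36: y(x) = e^(mx) sin(kx) should satisfy
# y'' - 2m y' + (m^2 + k^2) y = 0 with y(0) = 0 and y'(0) = k.
y = sp.exp(m*x)*sp.sin(k*x)
residual = sp.simplify(y.diff(x, 2) - 2*m*y.diff(x) + (m**2 + k**2)*y)
y0 = y.subs(x, 0)            # initial displacement
v0 = y.diff(x).subs(x, 0)    # initial velocity
```

The residual simplifying to zero, together with y0 = 0 and v0 = k, confirms the stated solution of the initial-value problem.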
Using the initial conditions y (0) = y0 and y (0) = 0, we find that c1 = y0 and ωc2 − y0 c = 0, or c2 = y0 c/ω. Therefore, y (t) = (y0 /ω)e−ct (ω cos ωt + c sin ωt). (b) Since ω = √(k^2 − c^2 ), we find k = √(ω^2 + c^2 ) (note k is assumed positive, so we take the positive square root). Therefore, continuing from part (a), we have y (t) = (y0 /ω)e−ct (ω cos ωt + c sin ωt) = k(y0 /ω)e−ct [(ω/k) cos ωt + (c/k) sin ωt] = k(y0 /ω)e−ct [(ω/√(ω^2 + c^2 )) cos ωt + (c/√(ω^2 + c^2 )) sin ωt]. Now since (ω/√(ω^2 + c^2 ))^2 + (c/√(ω^2 + c^2 ))^2 = 1, there exists φ such that cos φ = c/√(ω^2 + c^2 ) and sin φ = ω/√(ω^2 + c^2 ); that is, φ = tan−1 (ω/c). Hence, y (t) can be written as y (t) = (ky0 /ω)e−ct (sin φ cos ωt + cos φ sin ωt) = (ky0 /ω)e−ct sin(ωt + φ). The amplitude is not constant, but is given by (ky0 /ω)e−ct . Since this tends to zero as t → ∞, the vibrations tend to die out as time goes on. The damping is exponential and the motion is reasonable. 39. Let u(x, y ) = ex/α f (ξ ), where ξ = βx − αy , and assume that α > 0 and β > 0. (a) Let us compute the partial derivatives of u: ∂u/∂x = ex/α (df /dξ )(∂ξ/∂x) + (1/α)ex/α f = βex/α (df /dξ ) + (1/α)ex/α f and ∂ 2 u/∂x2 = β 2 ex/α (d2 f /dξ 2 ) + (2β/α)ex/α (df /dξ ) + (1/α2 )ex/α f. Moreover, ∂u/∂y = ex/α (df /dξ )(dξ/dy ) = −αex/α (df /dξ ) and ∂ 2 u/∂y 2 = −αex/α (d2 f /dξ 2 )(∂ξ/∂y ) = α2 ex/α (d2 f /dξ 2 ). From the formulas for ∂ 2 u/∂x2 and ∂ 2 u/∂y 2 , we have ∂ 2 u/∂x2 + ∂ 2 u/∂y 2 = ex/α [β 2 (d2 f /dξ 2 ) + (2β/α)(df /dξ ) + f /α2 + α2 (d2 f /dξ 2 )]. Now if ∂ 2 u/∂x2 + ∂ 2 u/∂y 2 = 0, then the last equation becomes β 2 (d2 f /dξ 2 ) + (2β/α)(df /dξ ) + f /α2 + α2 (d2 f /dξ 2 ) = 0, or (α2 + β 2 )(d2 f /dξ 2 ) + (2β/α)(df /dξ ) + f /α2 = 0. Therefore, d2 f /dξ 2 + [2β/(α(α2 + β 2 ))](df /dξ ) + f /[α2 (α2 + β 2 )] = 0. Letting p = β/(α(α2 + β 2 )) and q = 1/(α2 + β 2 ), the last equation reduces to d2 f /dξ 2 + 2p(df /dξ ) + (q/α2 )f = 0. (b) The auxiliary equation associated with Equation (6.2.10) is r2 + 2pr + q/α2 = 0. The quadratic formula yields the roots r = −p ± √(p2 − q/α2 ). Therefore, r = −β/(α(α2 + β 2 )) ± [β 2 /(α2 (α2 + β 2 )2 ) − (α2 + β 2 )/(α2 (α2 + β 2 )2 )]1/2 . Hence, r = (−β ± iα)/(α(α2 + β 2 )) = −p ± iq. Therefore, f (ξ ) = e−pξ [A sin qξ + B cos qξ ]. Since u(x, y ) = ex/α f (ξ ), we conclude that u(x, y ) = ex/α e−pξ (A sin qξ + B cos qξ ) = ex/α−pξ (A sin qξ + B cos qξ ). 40. The auxiliary equation is r2 + a1 r + a2 = 0. (a) Depending on whether or not r1 = r2 , the solution to the differential equation takes one of these two forms: y (x) = c1 er1 x + c2 er2 x or y (x) = er1 x (c1 + c2 x). In order for limx→+∞ y (x) = 0, we must have that the roots are negative: r1 < 0 and r2 < 0. (b) Complex roots guarantee a solution of the form y (x) = eax (c1 cos bx + c2 sin bx). In order for limx→+∞ y (x) = 0, we must have a < 0. Therefore, the complex conjugate roots must have negative real part. (c) By the quadratic formula, the roots of the auxiliary equation are r = [−a1 ± √(a1^2 − 4a2 )]/2. Case 1: If a1^2 − 4a2 = 0, then r = −a1 /2 < 0 is a double root and y (x) = erx (c1 + c2 x) goes to zero as x → ∞ (see part (a)). Case 2: If a1^2 − 4a2 > 0, then, since a2 > 0 gives √(a1^2 − 4a2 ) < a1 , both roots r = [−a1 ± √(a1^2 − 4a2 )]/2 are negative; thus, the solution y (x) = c1 er1 x + c2 er2 x goes to zero as x → ∞ (see part (a)). Case 3: If a1^2 − 4a2 < 0, then the roots of the auxiliary polynomial are complex conjugates with negative real part −a1 /2. Therefore, the conclusion follows from part (b). (d) In this case, the auxiliary equation is r2 + a1 r = 0, with roots r = 0 and r = −a1 . The general solution to the differential equation in this case is y (x) = c1 + c2 e−a1 x . Thus, limx→∞ y (x) = c1 , a constant. (e) In this case, the auxiliary equation is r2 + a2 = 0, with roots r = ±√(−a2 ) = ±√a2 i.
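The claim in Problem 40(c) can also be spot-checked numerically: for any sampled a1 > 0 and a2 > 0, both roots of r2 + a1 r + a2 = 0 should have negative real part. The sketch below is an addition to these solutions; the coefficient pairs are arbitrary samples, not values from the text.

```python
import cmath

def aux_roots(a1, a2):
    # Roots of r^2 + a1*r + a2 = 0 via the quadratic formula;
    # cmath.sqrt handles both the real and the complex-conjugate cases.
    disc = cmath.sqrt(a1**2 - 4*a2)
    return (-a1 + disc) / 2, (-a1 - disc) / 2

# Arbitrary positive coefficient pairs covering all three cases of part (c):
# complex roots, distinct real roots, and a double root.
for a1, a2 in [(1.0, 0.1), (0.5, 5.0), (4.0, 4.0), (3.0, 2.0)]:
    r1, r2 = aux_roots(a1, a2)
    assert r1.real < 0 and r2.real < 0
```

Every sampled pair yields roots in the left half-plane, consistent with the conclusion that all solutions decay to zero.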
The general solution √ √ to the diﬀerential equation in this case is y (x) = c1 cos a2 x + c2 sin a2 x , which is clearly bounded for all x ∈ R, since the sine and cosine functions are both bounded between ±1 in value. 41. All the roots must have a negative real part. 42. Multiplication of polynomial diﬀerential operators is identical to the multiplication of polynomials in general, an operation that is widely known to be commutative: P (D)Q(D) = Q(D)P (D). 43. This is the special case of Problem 44 for m = 1. Therefore, the solution to Problem 44 can be applied to solve this exercise as well. 44. The given equations are linearly independent if and only if m−1 m ck xk eax cos bx + k=0 dk xk eax sin bx = 0 =⇒ ck = 0 and dk = 0 k=0 for all k ∈ {0, 1, 2, ..., m}. If we let m m ck xk and Q(x) = P (x) = k=0 dk xk k=0 496 then the ﬁrst equation can be written as eax [P (x) cos bx + Q(x) sin bx] = 0, which is equivalent to P (x) cos bx + Q(x) sin bx = 0. nπ nπ = 0 for all integers where n is an integer implies cos bx = ±1 and sin bx = 0 so P b b (2n + 1)π n. Also if x = , where n is an integer, then cos bx = 0 and sin bx = ±1 which implies that b (2n + 1)π Q = 0 for all integers n. This in turn implies that P (x) and Q(x) are identically zero so ck = 0 b and dk = 0 for all k ∈ {0, 1, 2, ..., m}. Thus the given functions are linearly independent. Now x = 45. y (x) = c1 e−5x + c2 e−7x + c3 e19x . 46. y (x) = c1 e4x + c2 e−4x + e−2x (c3 cos 3x + c4 sin 3x). √ √ √ √ 47. y (x) = e−5x/2 [c1 cos ( 11x/2) + c2 sin ( 11x/2)] + e−3x/2 [c3 cos ( 7x/2) + c4 sin ( 7x/2)]. 48. y (x) = c1 e−4x + c2 cos 5x + c3 sin 5x + x(c4 cos 5x + c5 sin 5x). 49. y (x) = c1 e−3x + c2 cos x + c3 sin x + x(c4 cos x + c5 sin x) + x2 (c6 cos x + c7 sin x). Solutions to Section 6.3 True-False Review: 1. FALSE. Under the given assumptions, we have A1 (D)F1 (x) = 0 and A2 (D)F2 (x) = 0. 
However, this means that (A1 (D) + A2 (D))(F1 (x) + F2 (x)) = A1 (D)F1 (x) + A1 (D)F2 (x) + A2 (D)F1 (x) + A2 (D)F2 (x) = A1 (D)F2 (x) + A2 (D)F1 (x), which is not necessarily zero. As a speciﬁc example, if A1 (D) = D − 1 and A2 (D) = D − 2, then A1 (D) annihilates F1 (x) = ex and A2 (D) annihilates F2 (x) = e2x . However, A1 (D) + A2 (D) = 2D − 3 does not annihilate ex + e2x . 2. FALSE. The annihilator of F (x) in this case is A(D) = Dk+1 , since it takes k + 1 derivatives in order to annihilate xk . 3. TRUE. We apply rule 1 in this section with k = 1, or we can compute directly that (D − a)2 annihilates xeax . 4. FALSE. Some functions cannot be annihilated by a polynomial diﬀerential operator. Only those of the forms listed in 1-4 can be annihilated. For example, F (x) = ln x does not have an annihilator. 5. FALSE. For instance, if F (x) = x, A1 (D) = A2 (D) = D, then although A1 (D)A2 (D)F (x) = 0, neither A1 (D) nor A2 (D) annihilates F (x) = x. 6. FALSE. The annihilator of F (x) = 3 − 5x is D2 , but since r = 0 already occurs twice as a root of the auxiliary equation, the appropriate trial solution here is yp (x) = A0 x2 + A0 x3 . 497 7. FALSE. The annihilator of F (x) = x4 is D5 , but since r = 0 already occurs three times as a root of the auxiliary equation, the appropriate trial solution here is yp (x) = A0 x3 + A1 x4 + A2 x5 + A3 x6 + A4 x7 . 8. TRUE. The annihilator of F (x) = cos x is D2 + 1, but since r = ±i already occurs once as a complex conjugate pair of roots of the auxiliary equation, the appropriate trial solution is not yp (x) = A0 cos x + B0 sin x; we must multiply by a factor of x to occur for the fact that r = ±i is a pair of roots of the auxiliary equation. Problems: Note: In Problems 1-16, we use the four boxed formulas on pages 473-474 of the text. 1. A(D) = D2 (D − 1); (D − 1)(2ex ) = 0 and D2 (3x) = 0 =⇒ D2 (D − 1)(2ex − 3x) = 0. 2. A(D) = D + 3; (D + 3)(5e−3x ) = −15e−3x + 15e−3x = 0. 3. 
A(D) = (D − 7)4 (D2 + 16); (D − 7)4 (x3 e7x ) = 0 and (D2 + 16)(5 sin 4x) = 0 =⇒ (D − 7)4 (D2 + 16)(x3 e7x + 5 sin 4x) = 0. 4. A(D) = (D2 + 1)(D − 2)2 ; (D2 + 1)(sin x) = 0 and (D − 2)2 (3xe2x ) = 0 =⇒ (D2 + 1)(D − 2)2 (sin x + 3xe2x ) = 0. 5. A(D) = (D2 − 2D + 5)(D2 + 4); In the expression ex sin 2x, a = 1 and b = 2, so that D2 − 2aD + (a2 + b2 ) = D2 − 2D + 5 =⇒ (D2 − 2D + 5)(ex sin 2x) = 0. Moreover, in the expression 3 cos 2x, a = 0 and b = 2, so that D2 − 2aD + (a2 + b2 ) = D2 + 4 =⇒ (D2 + 4)(3 cos 2x) = 0. Thus we conclude that (D2 − 2D + 5)(D2 + 4)(ex sin 2x + 3 cos 2x) = 0. 6. A(D) = D2 + 4D + 5; In the expression 4e−2x sin x, a = −2 and b = 1 so that D2 − 2aD + (a2 + b2 ) = D2 + 4D + 5 =⇒ (D2 + 4D + 5)(4e−2x sin x) = 0. 7. A(D) = (D2 − 10D + 26)3 ; In the expression e5x cos x, a = 5 and b = 1 so that D2 − 2aD + (a2 + b2 ) = D2 − 10D + 26 =⇒ (D2 − 10D + 26)3 (e5x (2 − x) cos x) = 0. 8. A(D) = D3 (D − 4)2 ; Note that D3 (2x2 ) = 0 and (D − 4)2 [(1 − 3x)e4x ] = 0 =⇒ D3 (D − 4)2 ((1 − 3x)e4x + 2x2 ) = 0. 9. A(D) = (D − 4)2 (D2 − 8D + 41)D2 (D2 + 4D + 5)3 ; Note that (D−4)2 (xe4x ) = 0, (D2 −8D+41)(−2e4x sin 5x) = 0, D2 (3x) = 0, and (D2 +4D+5)3 (x2 e−2x cos x) = 0. Therefore, (D − 4)2 (D2 − 8D + 41)D2 (D2 + 4D + 5)3 (e4x (x − 2 sin 5x) + 3x − x2 e−2x cos x) = 0. 10. A(D) = D2 + 6D + 10; In the expression 2e−3x sin x + 7e−3x cos x, a = −3 and b = 1 so that D2 − 2aD + (a2 + b2 ) = D2 + 6D + 10 =⇒ (D2 + 6D + 10)(e−3x (2 sin x + 7 cos x)) = 0. 11. A(D) = (D2 + 9)2 ; Here we have applied Rule 3 (page 474) with a = 0, b = 3, and k = 1. 12. A(D) = (D2 + 1)3 ; Here we have applied Rule 3 (page 474) with a = 0, b = 1, and k = 2. 498 13. First we simplify sin4 x = (sin2 x)2 = 1 − cos 2x 2 2 = 1 1 + cos 4x 3 − 4 cos 2x + cos 4x 1 − 2 cos 2x + = . 4 2 8 3 1 We use A1 (D) = D to annihilate , A2 (D) = D2 + 4 to annihilate cos 2x, and A3 (D) = D2 + 16 to 8 2 1 annihilate cos 4x. Hence, A(D) = A1 (D)A2 (D)A3 (D) = D(D2 + 4)(D2 + 16) = D5 + 20D3 + 64D. 8 1 1 + cos 2x . 
We use A1 (D) = D to annihilate and we use 14. Following the hint, we write cos2 x = 2 2 1 A2 (D) = D2 + 4 to annihilate (cos 2x). Hence, A(D) = A1 (D)A2 (D) = D3 + 4D. 2 15. To ﬁnd the annihilator of F (x), let us use the identities cos2 x = 1 + cos 2x 2 and sin2 x = 1 − cos 2x 2 to rewrite the formula for F (x): F (x) = sin2 x cos2 x cos2 2x 1 − cos 2x 1 + cos 2x 1 + cos 4x = · · 2 2 2 1 = (1 − cos2 2x)(1 + cos 4x) 8 1 = sin2 2x(1 + cos 4x) 8 1 = (1 − cos 4x)(1 + cos 4x) 16 1 = (1 − cos2 4x) 16 1 1 1 1 sin2 4x = (1 − cos 8x) = − cos 8x. = 16 32 32 32 1 1 The annihilator for is A1 (D) = D, and the annihilator for cos 8x is A2 (D) = D2 + 64. Therefore, the 32 32 annihilator for F (x) is A(D) = A1 (D)A2 (D) = D(D2 + 64) = D3 + 64D. 16. To ﬁnd the annihilator of F (x), let us use the identities cos2 x = 1 + cos 2x 2 sin2 x = and 1 − cos 2x 2 and to rewrite the formula for F (x): 1 sin x cos x(1 + cos 2x) 2 1 = (sin 2x)(1 + cos 2x) 4 1 1 = sin 2x + (sin 2x)(cos 2x) 4 4 1 1 = sin 2x + sin 4x. 4 8 F (x) = sin 2x = 2 sin x cos x 499 1 1 sin 2x is A1 (D) = D2 +4, and the annihilator of sin 4x is A2 (D) = D2 +16. Therefore, 4 8 the annihilator for F (x) is A(D) = A1 (D)A2 (D) = (D2 + 4)(D2 + 16) = D4 + 20D2 + 64. The annihilator of 17. We have P (r) = r2 +4r +4 =⇒ yc (x) = c1 e−2x + c2 xe−2x . Now, A(D) = (D +2)2 . Operating on the given diﬀerential equation with A(D) yields the homogeneous diﬀerential equation (D + 2)4 y = 0, with solution y (x) = c1 e−2x +c2 xe−2x +A0 x2 e−2x +A1 x3 e−2x . Therefore, yp (x) = A0 x2 e−2x +A1 x3 e−2x . Diﬀerentiating yp yields yp = A0 e−2x (−2x2 +2x)+ A1 e−2x (−2x3 +3x2 ), yp = A0 e−2x (4x2 − 8x +2)+ A1 e−2x (4x3 − 12x2 +6x). Substituting into the given diﬀerential equation and simplifying yields 2A0 + 6A1 x = 5x, so that A0 = 5 5 5 0, A1 = . Hence, yp (x) = x3 e−2x , and therefore y (x) = c1 e−2x + c2 xe−2x + x3 e−2x . 6 6 6 18. We have P (r) = r2 + 1 =⇒ yc (x) = c1 cos x + c2 sin x. Now, A(D) = D − 1. 
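The annihilator computations in Problems 1-16 can be verified by simply applying the operators. The sketch below is an addition to these solutions (assuming SymPy; the helper name D is ours, not the text's); it checks the two factors used in Problem 3.

```python
import sympy as sp

x = sp.symbols('x')

def D(f, k=1):
    # k-fold application of the differentiation operator D
    return sp.diff(f, x, k)

# (D - 7)^4 should annihilate x^3 e^(7x): apply (D - 7) four times.
g = x**3*sp.exp(7*x)
for _ in range(4):
    g = D(g) - 7*g
annih1 = sp.simplify(g)

# (D^2 + 16) should annihilate 5 sin 4x.
h = 5*sp.sin(4*x)
annih2 = sp.simplify(D(h, 2) + 16*h)
```

Both results reduce to zero, matching the rule that (D − a)^(k+1) annihilates x^k e^(ax) and D^2 + b^2 annihilates sin bx.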
Operating on the given diﬀerential equation with A(D) yields (D − 1)(D2 + 1)y = 0, with solution y (x) = c1 cos x + c2 sin x + A0 ex . Hence, yp (x) = A0 ex . Substitution into the given diﬀerential equation yields A0 = 3, so that yp (x) = 3ex . Consequently, y (x) = c1 cos x + c2 sin x + 3ex . 19. We have P (r) = (r − 2)(r + 1) =⇒ yc (x) = c1 e2x + c2 e−x . Now, A(D) = D − 2. Operating on the given diﬀerential equation with A(D) yields (D −2)2 (D +1) = 0 with general solution y (x) = c1 e2x +c2 e−x +A0 xe2x . We therefore choose yp (x) = A0 xe2x . Diﬀerentiating yp yields yp (x) = A0 e2x (2x +1), yp (x) = A0 e2x (4x +4). 5 5 Substituting into the given diﬀerential equation and simplifying we ﬁnd A0 = . Hence, yp (x) = xe2x , and 3 3 5 so y (x) = c1 e2x + c2 e−x + xe2x . 3 20. We have P (r) = r2 + 16 =⇒ yc (x) = c1 cos 4x + c2 sin 4x. Now, A(D) = D2 + 1. Operating on the given diﬀerential equation with A(D) yields (D2 + 1)(D2 + 16)y = 0, so that y (x) = c1 cos 4x + c2 sin 4x + A0 cos x + B0 sin x. We therefore choose yp (x) = A0 cos x + B0 sin x. Substitution into the given diﬀerential 4 equation and simpliﬁcation yields 15A0 cos x + 15B0 sin x = 4 cos x, so that A0 = , B0 = 0. Hence, 15 4 4 yp (x) = cos x, so that y (x) = c1 cos 4x + c2 sin 4x + cos x. 15 15 21. We have P (r) = (r − 2)(r − 3) =⇒ yc (x) = c1 e2x + c2 e3x . Moreover, the annihilator of F (x) = 7e2x is A(D) = D − 2. Operating on the diﬀerential equation with A(D) gives (D − 2)2 (D − 3)y = 0, which has solution y (x) = c1 e2x + c2 e3x + A0 xe2x . Therefore, our trial solution is yp (x) = A0 xe2x . We must solve for A0 . Substituting the expression for yp (x) into the diﬀerential equation and simpliﬁcation yields A0 = −7, so yp (x) = −7xe2x . Hence, the general solution to the given diﬀerential equation is y (x) = c1 e2x + c2 e3x − 7xe2x . 22. We have P (r) = r2 + 2r + 5 =⇒ yc (x) = e−x [c1 cos 2x + c2 sin 2x]. Now, A(D) = D2 + 4. 
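Particular solutions found this way, such as the one in Problem 20, can be confirmed by direct substitution. A sketch (an addition to these solutions, assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')

# Problem 20: yp = (4/15) cos x should satisfy y'' + 16y = 4 cos x.
yp = sp.Rational(4, 15)*sp.cos(x)
residual = sp.simplify(yp.diff(x, 2) + 16*yp - 4*sp.cos(x))
```

The residual reduces to zero, confirming the coefficients A0 = 4/15 and B0 = 0 found above.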
Operating on the given diﬀerential equation with A(D) yields (D2 + 4)(D2 + 2D + 5)y = 0 with general solution y (x) = e−x (c1 cos 2x + c2 sin 2x) + A0 cos 2x + B0 sin 2x. We therefore choose yp (x) = A0 cos 2x + B0 sin 2x. Substitution into the given diﬀerential equation and simpliﬁcation yields (A0 + 4B0 ) cos 2x + (−4A0 + B0 ) sin 2x = 12 3 3 sin 2x. Consequently A0 + 4B0 = 0, −4A0 + B0 = 3. This system has a solution A0 = − , B0 = . 17 17 12 3 12 3 Hence, yp (x) = − cos 2x + sin 2x, and so y (x) = e−x (c1 cos 2x + c2 sin 2x) − cos 2x + sin 2x. 17 17 17 17 500 √ √ 23. We have P (r) = r2 + 6 =⇒ yc (x) = c1 cos 6x + c2 sin 6x. Next we must determine the annihilator 1 1 1 1 of F (x) = sin2 x cos2 x = (1 − cos 2x)(1 + cos 2x) = (1 − cos2 2x) = sin2 2x = (1 − cos 4x) = 4 4 4 8 11 − cos 4x, which is A(D) = D(D2 + 16). Operating on the given diﬀerential equation with A(D) yields 88 √ √ D(D2 +16)(D2 +6)y = 0, which has general solution y (x) = c1 cos 6x+c2 sin 6x+A0 +B0 cos 4x+C0 sin 4x. We therefore choose yp (x) = A0 + B0 cos 4x + C0 sin 4x. Substitution into the given diﬀerential equation and 11 1 1 simpliﬁcation yields −10B0 cos 4x − 10C0 sin 4x + 6A0 = − cos 4x. Therefore, A0 = , B0 = , and 88 48 80 1 1 C0 = 0. Therefore, yp (x) = + cos 4x. Thus, the general solution to the given diﬀerential equation is 48 80 y (x) = c1 cos √ √ 1 1 6x + c2 sin 6x + + cos 4x. 48 80 24. We have P (r) = (r + 3)(r − 1) =⇒ yc (x) = c1 e−3x + c2 ex . Next we must determine the annihilator of 1 − cos 2x F (x) = sin2 x = , which is A(D) = D(D2 + 4) = D3 + 4D. Operating on the given diﬀerential 2 equation with the annihilator A(D) yields D(D2 + 4)(D + 3)(D − 1)y = 0, which has the general solution y (x) = c1 e−3x + c2 ex + A0 + B0 cos 2x + C0 sin 2x. We therefore choose yp (x) = A0 + B0 cos 2x + C0 sin 2x. 1 7 Substitution into the given diﬀerential equation and simpliﬁcation yields A0 = − , B0 = , and C0 = 6 130 2 1 7 2 − . Therefore yp (x) = − + cos 2x − sin 2x. 
Thus, the general solution to the given diﬀerential 65 6 130 65 equation is 1 7 2 y (x) = c1 e−3x + c2 ex − + cos 2x − sin 2x. 6 130 65 25. We have P (r) = r3 + 2r2 − 5r − 6 = (r − 2)(r + 3)(r + 1) =⇒ yc (x) = c1 e2x + c2 e−3x + c3 e−x . Now, A(D) = D3 . Operating on the given diﬀerential equation with A(D) yields D3 (D − 2)(D + 3)(D + 1)y = 0, with general solution y (x) = c1 e2x + c2 e−3x + c3 e−x + A0 + A1 x + A2 x2 . We therefore choose yp (x) = A0 + A1 x + A2 x2 . Substituting into the given diﬀerential equation and simplifying yields 4A2 − 5(A1 + 2A2 x) − 6(A0 + A1 x + A2 x2 ) = 4x2 , 37 so that 4A2 − 5A1 − 6A0 = 0, −10A2 − 6A1 = 0, −6A2 = 4. This system has a solution A0 = − , A1 = 27 10 2 37 10 2 37 10 2 , A2 = − . Hence, yp (x) = − + x − x2 , and so y (x) = c1 e2x + c2 e−3x + c3 e−x − + x − x2 . 9 3 27 9 3 27 9 3 26. We have P (r) = (r − 1)(r2 + 1) =⇒ yc (x) = c1 ex + c2 cos x + c3 sin x. Now, A(D) = D + 1. Operating on the given diﬀerential equation with A(D) yields (D + 1)(D − 1)(D2 + 1)y = 0, with general solution y (x) = c1 ex + c2 cos x + c3 sin x + A0 e−x . We therefore choose yp (x) = A0 e−x . Substitution into the given 9 9 diﬀerential equation and simpliﬁcation yields A0 = − . Hence, yp (x) = − e−x , and so the general solution 4 4 9 is y (x) = c1 ex + c2 cos x + c3 sin x − e−x . 4 27. We have P (r) = (r + 1)3 =⇒ yc (x) = c1 e−x + c2 xe−x + c3 x2 e−x . Now, A(D) = (D − 2)(D + 1). Operating on the given diﬀerential equation with A(D) yields (D − 2)(D + 1)4 y = 0, with general solution y (x) = c1 e−x + c2 xe−x + c3 x2 e−x + A0 x3 e−x + A1 e2x . We therefore choose yp (x) = A0 x3 e−x + A1 e2x . 501 Diﬀerentiating yp yields yp = A0 e−x (−x3 + 3x2 ) + 2A1 e2x , yp = A0 e−x (x3 − 6x2 + 6x) + 4A1 e2x , yp = A0 e−x (−x3 + 9x2 − 18x + 6) + 8A1 e2x . Substituting into the given diﬀerential equation and simplifying we ﬁnd 6A0 e−x + 27A1 e2x = 2e−x + 3e2x , so 1 1 1 1 that A0 = , A1 = . 
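The polynomial trial solution of Problem 25 can likewise be confirmed by substitution. A sketch (an addition to these solutions, assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')

# Problem 25: yp = -37/27 + (10/9)x - (2/3)x^2 should satisfy
# y''' + 2y'' - 5y' - 6y = 4x^2.
yp = sp.Rational(-37, 27) + sp.Rational(10, 9)*x - sp.Rational(2, 3)*x**2
residual = sp.expand(yp.diff(x, 3) + 2*yp.diff(x, 2) - 5*yp.diff(x) - 6*yp - 4*x**2)
```

The residual expands to zero, so the system 4A2 − 5A1 − 6A0 = 0, −10A2 − 6A1 = 0, −6A2 = 4 was solved correctly.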
Hence, yp (x) = x3 e−x + e2x , and therefore, the general solution to the diﬀerential 3 9 3 9 1 3 −x 1 2x −x −x 2 −x equation is y (x) = c1 e + c2 xe + c3 x e + x e + e . 3 9 28. We have P (r) = (r − 1)(r − 2)(r − 3) =⇒ yc (x) = c1 ex + c2 e2x + c3 e3x . An appropriate trial solution is yp (x) = A0 e4x . Expanding the diﬀerential operators in the given diﬀerential equation gives y − 6y + 11y − 6y = 6e4x . Substituting for yp into this diﬀerential equation and simplifying yields A0 = 1, so that yp (x) = e4x . Hence, y (x) = c1 ex + c2 e2x + c3 e3x + e4x . Imposing the given initial conditions leads to the following system: c1 + c2 + c3 = 3, c1 + 2c2 + 3c3 = 6, c1 + 4c2 + 9c3 = 14. 1001 The reduced row-echelon form of the augmented matrix of this system is 0 1 0 1 , so that c1 = c2 = 0011 c3 = 1. Consequently, the solution to the initial-value problem is y (x) = ex + e2x + e3x + e4x . 29. We have P (r) = r3 − 2r2 − r + 2 = (r − 1)(r + 1)(r − 2) =⇒ yc (x) = c1 ex + c2 e−x + c3 e2x . We take a trial solution of the form yp (x) = A0 e3x . Substitution into the diﬀerential equation yields A0 (27 − 18 − 3 + 2) = 4, 1 1 so that yp (x) = e3x . Consequently, y (x) = c1 ex + c2 e−x + c3 e2x + e3x . 2 2 30. We have P (r) = (r + 4)(r − 4)(r − 3) =⇒ yc (x) = c1 e−4x + c2 e4x + c3 e3x . We need the modiﬁed trial solution: yp (x) = A0 xe3x . Diﬀerentiating yp yields yp = A0 e3x + 3A0 xe3x yp (x) = 6A0 e3x + 9A0 xe3x , yp = 27A0 e3x + 27A0 xe3x . Substituting these expressions into the given diﬀerential equation and simpliﬁcation 6 6 leads to A0 = − . Hence, y (x) = c1 e−4x + c2 e4x + c3 e3x − xe3x . 7 7 31. We have P (r) = (r − 2)(r + 3)(r + 2) =⇒ yc (x) = c1 e2x + c2 e−3x + c3 e−2x . An appropriate trial solution is yp (x) = A0 cos x + B0 sin x. Substitution into the given diﬀerential equation and simpliﬁcation leads to (5A0 − 15B0 ) sin x − (15A0 + 5B0 ) cos x = 4 cos x. Consequently, A0 and B0 must satisfy: A0 − 3B0 = 6 2 0, 15A0 + 5B0 = −4. 
This system has a solution A0 = − , B0 = − . Hence, the general solution to 25 25 6 2 the diﬀerential equation is y (x) = c1 e2x + c2 e−3x + c3 e−2x − cos x − sin x. 25 25 32. We have P (r) = r2 (r2 + 1) =⇒ yc (x) = c1 + c2 x + c3 cos x + c4 sin x. We must use the modiﬁed trial solution yp (x) = A0 x2 + A1 x3 . The diﬀerential equation is y (iv) + y = 6 − 2x. Substituting the trial 1 solution into this diﬀerential equation yields 2A0 + 6A1 x = 6 − 2x, so that A0 = 3 and A1 = . Hence, 3 1 1 yp (x) = 3x2 − x3 , and therefore y (x) = c1 + c2 x + c3 cos x + c4 sin x + 3x2 − x3 . 3 3 33. We have P (r) = (r + 1)3 =⇒ yc (x) = c1 e−x + c2 xe−x + c3 x2 e−x . We must use the modiﬁed trial solution yp (x) = A0 x3 e−x . Diﬀerentiating this trial solution gives yp = A0 e−x (−x3 + 3x2 ), yp = A0 e−x (x3 − 6x2 + 6x), yp = A0 e−x (−x3 + 9x2 − 18x + 6). 502 The given diﬀerential equation is y + 3y + 3y + y = 0. Substituting the trial solution into this diﬀerential 5 5 equation and simplifying yields A0 = . Hence, yp (x) = x3 e−x , and therefore the general solution to the 6 6 5 diﬀerential equation is y (x) = e−x (c1 + c2 x + c3 x2 + x3 ). 6 34. We have P (r) = (r + 1)(r2 + 1) =⇒ yc (x) = c1 e−x + c2 cos x + c3 sin x =⇒ yp (x) = ex (A0 + A1 x). 35. We have P (r) = (r2 + 4r + 13)2 =⇒ yc (x) = e−2x [c1 cos 3x + c2 sin 3x + x(c3 cos 3x + c4 sin 3x)] =⇒ yp (x) = x2 e−2x (A0 cos 3x + B0 sin 3x). 36. We have P (r) = (r2 + 4)(r − 2)3 =⇒ yc (x) = c1 cos 2x + c2 sin 2x + e2x (c3 + c4 x + c5 x2 ) =⇒ yp (x) = A0 + A1 x + x3 e2x (A2 + A3 x). 37. We have P (r) = r2 (r − 1)(r2 + 4)2 =⇒ yc (x) = c1 + c2 x + c3 ex + c4 cos 2x + c5 sin 2x + x(c6 cos 2x + c7 sin 2x) =⇒ yp (x) = A0 xex + x2 (A1 cos 2x + A2 sin 2x). 38. We have P (r) = (r2 − 2r + 2)3 (r − 2)2 (r + 4) =⇒ yc (x) = ex [c1 cos x + c2 sin x + x(c3 cos x + c4 sin x) + x2 (c5 cos x + c6 sin x)] + c7 e2x + c8 xe2x + c9 e−4x =⇒ yp (x) = x3 ex (A0 cos x + A1 sin x) + A2 x2 e2x . 39. 
We have P (r) = r(r2 − 9)(r2 − 4r + 5) =⇒ yc (x) = c1 + c2 e3x + c3 e−3x + e2x (c4 cos x + c5 sin x) =⇒ yp (x) = A0 xe3x + xe2x (A1 cos x + A2 sin x). 40. Consider P (D)y = cxk eax cos bx (40.1) and let yc denote the complementary function. The appropriate annihilator for (40.1) is A(D) = [D2 − 2aD + (a2 + b2 )]k+1 and so a trial solution for (40.1) can be determined from the general solution to A(D)P (D)y = 0. (40.2) Two cases arise: Case 1: If r = a + ib is not a root of P (r) = 0, then the general solution of (40.2) will be of the form y (x) = yc (x) + eax [(A0 + A1 x + ... + Ak xk ) cos bx + (B0 + B1 x + ... + Bk xk ) sin bx] so that an appropriate trial solution is yp (x) = eax [(A0 + A1 x + ... + Ak xk ) cos bx + (B0 + B1 x + ... + Bk xk ) sin bx]. Case 2: If r = a + ib is a root of multiplicity m of P (r) = 0, then the complementary function yc will contain the terms eax [(c0 + c1 x + ... + cm−1 xm−1 ) cos bx + (d0 + d1 x + ... + dm−1 xm−1 ) sin bx]. The operator A(D)P (D) will therefore contain the term [D2 − 2aD + (a2 + b2 )]m+k+1 , so that the terms in the general solution to (40.2) that do not arise in the complementary function are yp (x) = xm eax [(A0 + A1 x + ... + Ak xk ) cos bx + (B0 + B1 x + ...Bk xk ) sin bx]. 503 Solutions to Section 6.4 True-False Review: 1. TRUE. 2. TRUE. 3. FALSE. An appropriate complex-valued trial solution is yp (x) = A0 xeix . 4. TRUE. 5. FALSE. An appropriate complex-valued trial solution is yp (x) = A0 x2 e(2+5i)x . 6. TRUE. Problems: 1. Consider z + 2z + z = 50e3ix . Let zp (x) = Ae3ix , zp = 3Aie3ix , and zp = −9Ae3ix . Substituting, we obtain −9Ae3ix + 2(3Aie3ix ) + Ae3ix = 50e3ix =⇒ A = −4 − 3i. Hence, zp (x) = (−4 − 3i)e3ix = (−4 − 3i)(cos 3x + i sin 3x) = (6 sin 3x − 8 cos 3x) + i(−4 sin 3x − 3 cos 3x). Consequently, yp (x) = Im(zp ) = −3 cos 3x − 4 sin 3x. 2. Consider the complex equation z − z = 10e(2+i)x . Let zp (x) = A0 e(2+i)x . Substituting we get A0 (3 + 4i)e(2+i)x − A0 e(2+i)x = 10e(2+i)x =⇒ A0 = 1 − 2i. 
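The complex arithmetic in Problem 1 of this section is a one-line check with ordinary complex numbers: substituting zp = A e^(3ix) into z'' + 2z' + z = 50 e^(3ix) gives A = 50/((3i)^2 + 2(3i) + 1), which should equal −4 − 3i. A sketch (an addition to these solutions):

```python
# Coefficient from substituting zp = A e^(3ix) into z'' + 2z' + z = 50 e^(3ix);
# in Python, 3j denotes the imaginary number 3i.
A = 50 / ((3j)**2 + 2*(3j) + 1)
# A = -4 - 3i, so yp(x) = Im(zp) = -3 cos 3x - 4 sin 3x, as stated.
```

The same pattern (divide the forcing amplitude by P(ib)) verifies Problems 2-9 as well, whenever ib is not a root of the auxiliary equation.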
Hence, zp (x) = (1 − 2i)e(2+i)x = e2x [(cos x + 2 sin x) + i(sin x − 2 cos x)], so that yp (x) = Re(zp ) = e2x (cos x + 2 sin x). 3. Consider z + 4z + 4z = 169e3ix . Let zp (x) = A0 e3ix . Substituting we get (−9 + 12i + 4)A0 = 169 =⇒ A0 = −5 − 12i. Hence, zp = (−5 − 12i)(cos 3x + i sin 3x), so that yp (x) = Im(zp ) = −12 cos 3x − 5 sin 3x. 4. Consider z − z − 2z = 20 − 20e2ix . Letting zp (x) = A + Be2ix , substituting back into the differential equation, and solving, we get A = −10 and B = 3 − i. Hence, zp (x) = −10 + (3 − i)(cos 2x + i sin 2x), so that yp (x) = Re(zp ) = −10 + 3 cos 2x + sin 2x. 5. Consider z + z = 3e(1+2i)x . Let zp (x) = A0 e(1+2i)x . Substituting into the differential equation we get [(−3 + 4i) + 1]A0 = 3 =⇒ A0 = −(3/10)(1 + 2i). Hence, zp (x) = −(3/10)(1 + 2i)ex (cos 2x + i sin 2x), so that yp (x) = Re(zp ) = (3/10)ex (2 sin 2x − cos 2x). 6. Consider z + 2z + 2z = 2e(−1+i)x . Let zp (x) = A0 xe(−1+i)x . Substituting into the differential equation we get A0 = −i. Hence, zp (x) = −ixe(−1+i)x = xe−x sin x + i(−xe−x cos x), so that yp (x) = Im(zp ) = −xe−x cos x. 7. Consider z − 4z = 100xe(1+i)x . Let zp (x) = (A0 + A1 x)e(1+i)x . Substituting and solving we get A0 = 2 − 14i and A1 = −20 − 10i. Hence, zp (x) = ex [(2 − 14i) + (−20 − 10i)x](cos x + i sin x), so that yp (x) = Im(zp ) = ex [(−14 cos x + 2 sin x) − 10x(2 sin x + cos x)]. 8. Consider z + 2z + 5z = 4e(−1+2i)x . Let zp (x) = A0 xe(−1+2i)x . Substituting into the differential equation and solving yields A0 = −i. Hence, zp (x) = −ixe(−1+2i)x = xe−x sin 2x − ixe−x cos 2x, so that yp (x) = Re(zp ) = xe−x sin 2x. 9. Consider z − 2z + 10z = 24e(1+3i)x . Let zp (x) = Axe(1+3i)x . Substituting into the differential equation and solving we get A = −4i. Hence, zp (x) = −4ixe(1+3i)x , so that yp (x) = Re(zp ) = 4xex sin 3x. 10. We can write 16 cos 4x − 8 sin 4x = 16 · (e4ix + e−4ix )/2 − 8 · (e4ix − e−4ix )/(2i) = (8 + 4i)e4ix + (8 − 4i)e−4ix . Now consider z + 16z = 34ex + (8 + 4i)e4ix + (8 − 4i)e−4ix . Since ±4i is a root of the auxiliary equation, a reasonable choice for zp is zp (x) = Aex + Bxe4ix + Cxe−4ix . Substituting into the differential equation and solving, we get A = 2, B = 1/2 − i, and C = 1/2 + i. Hence, zp (x) = 2ex + (1/2 − i)xe4ix + (1/2 + i)xe−4ix = 2ex + x cos 4x + 2x sin 4x. Therefore, yp (x) = 2ex + x cos 4x + 2x sin 4x. 11. Consider z + ω0^2 z = F0 eiωt . Case 1: If ω ≠ ω0 , let zp (t) = Aeiωt . Substituting into the differential equation and solving we obtain A = F0 /(ω0^2 − ω 2 ). Hence, zp (t) = A(cos ωt + i sin ωt) = [F0 /(ω0^2 − ω 2 )] cos ωt + i[F0 /(ω0^2 − ω 2 )] sin ωt. Now yp (t) = Re(zp ), so yp (t) = [F0 /(ω0^2 − ω 2 )] cos ωt. Case 2: If ω = ω0 , let zp (t) = Ateiω0 t . Substituting into the differential equation and solving we get A = −F0 i/(2ω0 ), so zp (t) = −[F0 i/(2ω0 )]teiω0 t = [F0 t/(2ω0 )] sin ω0 t − i[F0 t/(2ω0 )] cos ω0 t. Hence, yp (t) = Re(zp ), so yp (t) = [F0 t/(2ω0 )] sin ω0 t. Solutions to Section 6.5 True-False Review: 1. TRUE. This is reflected in the negative sign appearing in Hooke's Law. The spring force Fs is given by Fs = −kL0 , where k > 0 and L0 is the displacement of the spring from its equilibrium position. The spring force acts in a direction opposite to that of the displacement of the mass. 2. FALSE. The circular frequency is the square root of k/m: ω0 = √(k/m). 3. TRUE. The frequency of oscillation, denoted f in the text, and the period of oscillation, denoted T in the text, are inverses of one another: f T = 1. For instance, if a system undergoes f = 3 oscillations in one second, then each oscillation takes one-third of a second, so T = 1/3. 4. TRUE. This is mathematically seen from Equation (6.5.14), in which the exponential factor e−ct/2m decays to zero for large t. Therefore, y (t) becomes smaller and smaller as t increases. This is depicted in Figure 6.5.5. 5. FALSE. In the cases of critical damping and overdamping, the system cannot oscillate.
In fact, the system passes through the equilibrium position at most once in these cases. 6. TRUE. The frequency of the driving term is denoted by ω , while the circular frequency of the spring-mass system is denoted by ω0 . Case 1(b) in this section describes resonance, the situation in which ω = ω0 . 7. TRUE. Air resistance tends to dampen the motion of the spring-mass system, since it acts in a direction opposite to that of the motion of the spring. This is reﬂected in Equation (6.5.4) by the negative sign appearing in the formula for the damping force Fd . 506 8. FALSE. From the formula m k T = 2π for the period of oscillation, we see that for a larger mass, the period is larger (i.e. longer). 9. TRUE. We see in the section on free oscillations of a mechanical system, we see that the resulting motion of the mass is given by (6.5.11), (6.5.12), or (6.5.13), and in all of these cases, the amplitude of this system is bounded. Only in the case of forced oscillations with resonance can the amplitude increase without bound. Problems: d2 y + 4y = 0 =⇒ r2 + 4 = 0 =⇒ r = ±2i =⇒ y (t) = A cos 2t + B sin 2t and y (t) = dt2 −2A sin 2t + 2B cos 2t. Now y (0) = 2 =⇒ A = 2 and y (0) = 4 =⇒ B = 2. Thus, y (t) = 2 cos 2t + 2 sin 2t. √ √ Amplitude: A0 = 22 + 22 = 2 2, Natural frequency: ω0 = 2, π Phase of the motion: φ = tan−1 (1) = , 4 2π Period of oscillation: T = = π. ω0 1. We have 2. d2 y 2 2 + ω0 y = 0 =⇒ r2 + ω0 = 0 =⇒ r = ±ω0 i =⇒ dt2 y (t) = A cos ω0 t + B sin ω0 t Now y (0) = y0 =⇒ A = y0 and y (0) = v0 =⇒ B = v0 ω0 Natural frequency: ω0 = ω0 , Phase of the motion: sin φ = Amplitude: A0 = 2 y0 + v0 v0 . Thus, y (t) = y0 cos ωt + sin ωo t. ω0 ω0 2 , y0 2 y0 + Period of oscillation: T = y (t) = −ωo A sin ω0 t + ω0 B cos ω0 t. and v0 ω0 2 v0 , cos φ = ω0 2 y0 + vo ω0 2 , 2π . ω0 3. (a) 3 N = k (1m) =⇒ k = 3 N/m. √ √ d2 y 3 + y = 0 =⇒ y = A cos ( 3t/2) + B sin ( 3t/2). Since y (0) = −1, 2 dt 4√ √ √ √ 3 3 1 we conclude that A = −1. 
Now y′(t) = −(√3/2)A sin(√3 t/2) + (√3/2)B cos(√3 t/2) and y′(0) = −1/2, so that B = −√3/3. Thus, y(t) = −cos(√3 t/2) − (√3/3) sin(√3 t/2).
Amplitude: A0 = √((−1)² + (−√3/3)²) = 2√3/3. Phase of the motion: cos φ = −√3/2, sin φ = 1/2 =⇒ φ = 5π/6. Period of oscillation: T = 2π/ω0 = 4√3π/3 sec.
(b) ω0 = √(k/m) = √(3/4) = √3/2.

4. We have P(r) = r² + 2r + 5 = 0 =⇒ r = −1 ± 2i =⇒ y(t) = e^(−t)(A cos 2t + B sin 2t) and y′(t) = e^(−t)[(−2A − B) sin 2t + (−A + 2B) cos 2t]. Thus, y(0) = 1 =⇒ A = 1 and y′(0) = 3 =⇒ B = 2. Therefore, y(t) = e^(−t)(cos 2t + 2 sin 2t). The motion is under-damped.

Figure 78: Figure for Problem 4

5. We have P(r) = r² + 3r + 2 = 0 =⇒ r = −1 or r = −2 =⇒ y(t) = Ae^(−t) + Be^(−2t) and y′(t) = −Ae^(−t) − 2Be^(−2t). Thus y(0) = 1 and y′(0) = 0 =⇒ A + B = 1 and −A − 2B = 0 =⇒ A = 2 and B = −1. Therefore, y(t) = 2e^(−t) − e^(−2t). The motion is over-damped.

Figure 79: Figure for Problem 5

6. We have P(r) = 4r² + 12r + 5 = 0 =⇒ r = −1/2 or r = −5/2 =⇒ y(t) = Ae^(−t/2) + Be^(−5t/2) and y′(t) = −(A/2)e^(−t/2) − (5B/2)e^(−5t/2). Since y(0) = 1 and y′(0) = −3, we have A + B = 1 and −(1/2)A − (5/2)B = −3 =⇒ A = −1/4 and B = 5/4. Therefore, y(t) = −(1/4)e^(−t/2) + (5/4)e^(−5t/2). The motion is over-damped.

Figure 80: Figure for Problem 6

7. We have P(r) = r² + 2r + 1 = 0 =⇒ r = −1 (with multiplicity 2) =⇒ y(t) = Ae^(−t) + Bte^(−t) and y′(t) = −Ae^(−t) + B(1 − t)e^(−t). Since y(0) = −1 and y′(0) = 2, we find that A = −1 and −A + B = 2 =⇒ B = 1. Therefore, y(t) = −e^(−t) + te^(−t). The motion is critically-damped.

Figure 81: Figure for Problem 7

8. We have P(r) = 4r² + 4r + 1 = 0 =⇒ r = −1/2 (with multiplicity 2) =⇒ y(t) = Ae^(−t/2) + Bte^(−t/2) and y′(t) = −(A/2)e^(−t/2) + B(1 − t/2)e^(−t/2). Since y(0) = 4 and y′(0) = −1, we find that A = 4 and −(1/2)A + B = −1 =⇒ B = 1. Therefore, y(t) = 4e^(−t/2) + te^(−t/2). The motion is critically-damped.

Figure 82: Figure for Problem 8

9.
We have P (r) = r2 + 4r + 7 = 0 =⇒ −2 ± y (t) = e−2t (A cos √ √ 3t + B sin 3t) and √ 3i =⇒ √ √ √ y (t) = e−2t [(− 3A − 2B ) sin 3t + (−2A + 3B ) cos 3t]. 509 √ √ 10 3 Since y (0) = 2 and y (0) = −6, we ﬁnd that A + 0 = 2 and −2A + 3B = 6 =⇒ A = 2 and B = so 3 √ √ √ 10 3 y (t) = e−2t (2 cos 3t + sin 3t). The motion is under-damped. 3 y t Figure 83: Figure for Problem 9 10. We have P (r) = r2 + 5r + 6 = 0 =⇒ r = −2 or r = −3 =⇒ y (t) = Ae−2t + Be−3t and y (t) = −2Ae−2t − 3Be−3t . Since y (0) = −1 and y (0) = 4, we ﬁnd that A + B = −1 and −2A − 3B = 4 =⇒ A = 1 and B = −2. Therefore, y (t) = e−2t − 2e−3t . The motion is over-damped. y t Figure 84: Figure for Problem 10 11. At equilibrium, e−2t − 2e−3t = 0 =⇒ t = ln 2. For maximum displacement, y (t) = 0 =⇒ −2e−2t + 1 6e−3t = 0 =⇒ t = ln 3 so the maximum displacement is given by y (ln 3) = . 27 √ √ 12. We have r2 + 2αr + 1 = 0 =⇒ r = −α ± α2 − 1 =⇒ r = −α ± µ where µ = α2 − 1 so that y (t) = e−αt (c1 eµt + c2 e−µt ). i.) System is under-damped =⇒ α2 − 1 < 0 =⇒ |α| < 1, but α > 0 so 0 < α < 1. ii.) System is critically-damped =⇒ |α| = 1, but α > 0 so α = 1. iii.) System is over-damped =⇒ |α| > 1, but α > 0 so α > 1. Since y (t) = e−αt [(µ − α)c1 eµt − (µ + α)c2 e−µt ] c1 µ+α and y (0) = 0, we ﬁnd that (µ − α)c1 − (µ + α)c2 = 0 =⇒ = < 0, which means that the system c2 µ−α will pass through equilibrium. 13. (a) We have P (r) = r2 + 3r + 2 = 0 =⇒ r = −1 or r = −2 =⇒ y (t) = Ae−t + Be−2t and y (t) = −Ae−t − 2Be−2t . 510 Since y (0) = 1, we have A + B = 1, and since y (0) = −3, we have −A − 2B = −3. Therefore, A = −1 and B = 2, so that y (t) = −e−t + 2e−2t . (b) y (t) = 0 =⇒ −e−t + 2e−2t = 0 =⇒ t = ln 2. (c) y t Figure 85: Figure for Problem 13 14. (a) In the under-damped case, it has been shown in the text that the general solution is given by y (t) = cA0 − c t c c A0 e− 2m cos (µt − φ). Hence, y (t) = − e 2m cos (µt − φ) − µA0 e− 2m t sin (µt − φ). 
Setting y (t) = 0 2m c c cos (µt − φ) = −µ sin (µt − φ) ,or equivalently, tan (µt − φ) = − . Since the tangent we obtain 2m cµm function is periodic of period π , it follows that successive maxima (or minima) of y (t) will occur when 2π 4πm T= =√ . µ 4mk − c2 (b) If c2 4πm < 1 then T = √ ≈ 2π 4km 4mk − c2 15. We have P (r) = r2 + m . k c2 c c r+ = 0 =⇒ r = − (with multiplicity 2). Thus, 2 m 4m 2m y (t) = Aert + Btert and y (t) = Arert + Brtert + Bert . y0 c Since y (0) = A and y (0) = Ar + B , we ﬁnd that A = y0 and Ar + B = v0 . Therefore, B = v0 + . Thus 2m y0 c c c y (t) = y0 ert + v0 + tert , or y (t) = ert [y0 + t(v0 + y0 )]. Since ert is never zero and y0 + t(v0 + y0 ) 2m 2m 2m is linear in t, it follows that the product can have at most one zero. 16. Let r be the radius of the cylinder, ρ be the density of the cylinder, and ρ0 be the density of the ﬂuid. L ρ0 In equilibrium: ρπr2 Lg = ρ0 πr2 g =⇒ ρ = . 4 4 2 L d2 y d2 y 4g 2 dy 2 2 In motion: ρπr L 2 = ρπr Lg − πr g + y ρ0 =⇒ Lρ 2 = ρ0 gy =⇒ + y = 0. Hence, dt 4 dt dt2 L g g y (t) = c1 cos 2 t + c2 sin 2 t . Thus the motion is simple harmonic with circular frequency L L g L and period of motion T = π . ω0 = 2 L g 511 17. Let θ represent the angular displacement of the pendulum. From Equation (6.5.28), we have 0. Since L = 0.5, this becomes d2 θ + 2gθ = 0. Therefore, dt2 θ(t) = c1 cos ( Since θ(0) = 2gt) + c2 sin ( d2 θ g + θ= dt2 L 2gt). √ 1 dθ 1 1 , we have c1 = , and since (0) = 0, we have c2 = 0. Thus θ(t) = cos ( 2gt). 10 10 dt 10 d2 θ g 18. Let θ represent the angular displacement of the pendulum. From Equation (6.5.28), we have 2 + θ = dt L 0, which implies g g t + B sin t. θ(t) = A cos L L Since θ(0) = α, we have A = α, and since dθ (0) = β , we have B = dt θ(t) = α cos g t+ L L β sin g L β . Hence, g g t, L g α2 g + β 2 L t − φ), where the amplitude is A0 = , the phase, φ, is determined by L g α β L L , and the period is T = 2π . cos φ = and sin φ = A0 A0 g g or θ(t) = A0 cos ( L . 
The time required g for the pendulum to swing from its extreme position on one side to its extreme position on the other side is L g half the period, T /2. If T /2 = 1, then π = 1 and L = 2 , so that L ≈ 0.993 meters. g π 19. In Problem 18, it was shown that the period of the simple pendulum is T = 2π 2 1 0.9 . The number of ticks per second is ≈ 9.8 T π 60 9.8 ticks per minute is approximately ≈ 63. π 0.9 20. We have T = 2π L ≈ 2π g 9.8 . Thus the number of 0.9 21. Since each side is stretched 2a, it follows that the vertical component of each side is given by 2a cos (θ0 ) = 4 8a 2a = where θ0 is the angle the side of length 5a makes with the vertical. Consequently, in equilibrium, 5 5 8a 5mg 2k = mg =⇒ k = . Now when the mass is pulled down a small vertical distance, y (t), from 5 16a equilibrium the length 4a changes to 4a + y (t). Using the diagram below we see that the length of the 8 y2 4 hypotenuse is y 2 + 8ay + 25a2 = 5a 1 + y + ≈ 5a(1 + y + ...). 2 25 25a 25a Also, from the ﬁgure, cos θ = y + 4a y + 4a 4 4 1 16 4 9 ≈ (1 − y + ...) ≈ + y ( − ) + ... ≈ + y. 4 5a 25a 5 5a 125a 5 125a 5a(1 + y + ...) 25a 512 Thus, 4 y + ...) − 3a cos θ + mg 25a 4 9 4 y + ...) + mg = −2k (2a + y + ...)( + 5 5 125a 8 16 18 16ak = −2k a + ( + )y + ... + 5 25 125 5 98 = −2k y + ... 125 5mg 98 = −2 y + ... 16a 125 49mg 49mg y + ··· ≈ − y. =− 100a 100a F = −2k 5a(1 + Thus, 49g d2 y + y = 0, and hence, the period is T = 2π dt2 100a 20π 100a = 49g 7 a . g 3a y + 4a y2 + 8ay + 25a2 θ Figure 86: Figure for Problem 21 22. Since each side is stretched b − a, it follows that the vertical component of each side is given by (L − L0 ) cos θ0 where theta0 is the angle the side of length L makes with the vertical. Consequently, in equilibrium, 2k (L − L0 ) cos θ0 = mg, so that k= mg mgL = . 2(L − L0 ) cos θ0 2(L − L0 ) L2 − L2 0 Now when the mass is pulled down a small vertical distance, y (t), from equilibrium the length L2 − L2 0 changes to L2 − L2 + y (t). 
Using the diagram below we see that the length of the hypotenuse is 0 2 (y + 2y L2 − L2 0 1/2 + L) 2 = L 1 + 2y L L2 − L2 0 y2 +2 L 1/2 ≈L 1+y for small y . Also from the ﬁgure, cos θ ≈ 1 (y + L L2 − L2 ) 0 L2 − y L2 − L2 0 L2 + ... , L2 − L2 0 L2 513 and so cos θ ≈ cos θ0 + L2 0 y . Consequently, L3 F = −2k L(1 + L2 − L2 0 y ) − L0 cos θ + mg L2 ≈ −2k (L − L0 ) + = −2k L2 − L2 L2 0 0 y (cos θ0 + 3 y ) + mg 2 L L (L − L0 ) cos θ0 + y = −2ky L2 0 (L − L0 ) + L3 L2 0 (L − L0 ) + L3 2 L2 − L2 0 L L2 − L2 0 cos θ0 L + (L − L0 ) cos θ0 2ky 3 (L − L2 ) 0 L3 mgL 2y = − 3 (L − L0 )(L2 + L0 L + L2 ) 0 L 2(L − L0 ) L2 − L2 0 =− =− Hence, m mg (L2 + L0 L + L2 ) 0 L2 L2 − L2 0 y. g (L2 + L0 L + L2 ) d2 y mg (L2 + L0 L + L2 ) 2π 0 0 + where ω 2 = y = 0. Thus the period is T = . dt2 ω L2 L2 − L2 L2 L2 − L2 0 0 L0 y + L2 - L02 y2 + 2y L2 - L02 + L2 θ Figure 87: Figure for Problem 22 23. (a) The motion is under-damped. (b) We have P (r) = r2 + 2r + 5 = 0 =⇒ r = −1 ± 2i so yh (t) = e−t (c1 cos 2t + c2 sin 2t). Letting yp (t) = C cos 2t + D sin 2t, we obtain yp (t) = −2C sin 2t + 2D cos 2t and yp (t) = −4C cos 2t − 4D sin 2t. Substituting these results into the original equation results in the system: −4C + 4D + 5C = 0 and −4D − 4C + 5D = 17. The solution is C = −4 and D = 1; consequently, yp (t) = −4 cos 2t + sin 2t. The general solution is y (t) = e−t (c1 cos 2t + c2 sin 2t) − 4 cos 2t + sin 2t. Since y (0) = −2, we have c1 = 2, and since y (0) = 0 we have 2c2 − c1 + 2 = 0 =⇒ c2 = 0. Thus y (t) = 2e−t cos 2t − 4 cos 2t + sin 2t. Transient part: 2e−t cos 2t. Steady state part: −4 cos 2t + sin 2t. 24. Since the system is resonating, ω = ω0 . Thus r2 +ω0 2 = 0 =⇒ r = ±ω0 i =⇒ yc (t) = A cos ω0 t+B sin ω0 t. Let yp (t) = t(C cos ω0 t + D sin ω0 t) so that yp (t) = (ω0 Dt + C ) cos ω0 t + (D − ω0 Ct) sin ω0 t and yp (t) = 514 ω0 D cos ω0 t − (ω0 2 D + Cω0 ) sin ω0 t − ω0 C sin ω0 t + (Dω0 − ω0 2 Ct) cos ω0 t. 
Substituting these results into F0 F0 y + ω0 y = F0 sin ω0 t and equating like terms we obtain D = 0 and C = − so yp (t) = − t cos ω0 t. 2ω0 2ω 0 F0 From this last equation we have y (t) = A cos ω0 t + B sin ω0 t − t cos ω0 t. Since y (0) = 0, we have A = 0, 2ω0 F0 F0 F0 F0 and since y (0) = 0, we have ω0 B − . Thus, y (t) = = 0 =⇒ B = sin ω0 − t cos ω0 t. 2ω 0 2ω0 2 2ω 0 2ω0 25. Let yp (t) = A sin t + B cos t =⇒ yp (t) = A cos t − B sin t =⇒ yp (t) = −A sin t − B cos t. Substituting these results into y (t) + 3y (t) + 2y (t) = 10 sin t, we obtain the system A − 3B = 10 and 3A + B = 0 so √ √ A = 1 and B = −3. Hence, yp (t) = sin t − 3 cos t = A2 + B 2 sin [t + tan−1 (B/A)] = 10 sin (t − tan−1 3). 26. Case 1: ω = ω0 . d2 y F0 = −Aω 2 cos ωt − Bω 2 sin ωt. From + ω0 2 y = cos ωt, we obdt2 m F0 F0 tain (−ω 2 + ω0 2 )(A cos ωt + B sin ωt) = cos ωt =⇒ A cos ωt + B sin ωt = cos ωt =⇒ A = m m(ω0 2 − ω0 ) F0 F0 and B = 0. Hence, yp (t) = cos ωt. 2 − ω2 ) m(ω0 m(ω0 2 − ω 2 ) Let yp = A cos ωt + B sin ωt so yp Case 2: ω0 = ω . Let yp (t) = t(A cos ω0 t + B sin ω0 t) so that yp (t) = A cos ω0 t + B sin ω0 t + t(−Aω0 sin ω0 t + Bω0 cos ω0 t), and yp (t) = 2(−Aω0 sin ωo t + Bω0 cos ω0 t) − t(Aω0 2 cos ω0 t + Bω0 2 sin ω0 t) = sin ω0 t(−2Aω0 − Btω0 2 ) + cos ω0 t(2Bω0 − tAω0 2 ). Hence, yp + ω0 y = sin ω0 t(−2Aω0 − Btω0 2 + Btω0 2 ) + cos ω0 t(2Bω0 − Atω0 2 + Atω0 2 ) F0 = sin ω0 t(−2Aω0 ) + cos ω0 t(2Bω0 ) = cos ω0 t. m Hence, A = 0 and B = 27. If F0 F0 . Thus, yp (t) = t sin ω0 t. 2mω0 2mω0 m ω = , where m and n are integers, then from the given equation ω0 n y t+ 2πn ω0 2πn F0 cos ω t + m(ω0 2 − ω 2 ) ω0 F0 2πωn = A0 cos (ω0 t + 2πn − φ) + cos ω t + m(ω0 2 − ω 2 ) ω0 F0 = A0 cos (ω0 t − φ) + cos (ωt + 2πm) m(ω0 2 − ω 2 ) F0 = A0 cos (ω0 t − φ) + cos ωt m(ω0 2 − ω 2 ) = y (t). = A0 cos ω0 t + Thus the motion is periodic with period T = 2πn ω0 2πn . ω0 −φ + 515 3 28. From Problem 27, we have yc (t) = A0 cos ( t − φ). Let yp (t) = A cos 2t + B sin 2t. 
Taking the ﬁrst and 4 second derivatives of yp and substituting the results into the diﬀerential equation gives −4(A cos 2t + B sin 2t) + 9 (A cos 2t + B sin 2t) = 55 cos 2t. 16 Equating like terms and solving for the constants we obtain A = −16 and B = 0. Consequently, y (t) = 3 ω 8 A0 cos ( t − φ) − 16 cos 2t. Using the result of Problem 27, = implies that the motion is periodic with 4 ω0 3 2π (3) period T = = 8π . 3/4 29. Let yp = A cos ωt + B sin ωt so that yp = −ωA sin ωt + Bω cos ωt and yp = −ω 2 A cos ωt − Bω 2 sin ωt. Substituting these results into the original equation yields −ω 2 A cos ωt − Bω 2 sin ωt + k F0 c (−ωA sin ωt + Bω cos ωt) + (A cos ωt + B sin ωt) = cos ωt, m m m which implies that cos ωt(−ω 2 A + cω k cω k F0 B + A) + sin ωt(−ω 2 B − A + B) = cos ωt. m m m m m Therefore, cω k cω k F0 A + B = 0 and − ω2 A + B+ A= . m m m m m mω F0 k B= . Solving this system for A and B yields Thus, ( − ω 2 )A + m c m −ω 2 B − B= F0 cω (k − mω 2 )2 + c2 ω 2 and A= (k − mω 2 )F0 . (k − mω 2 )2 + c2 ω 2 F0 [(k − mω 2 ) cos ωt + cω sin ωt]. (k − mω 2 )2 + c2 ω 2 √ m2 (ω0 2 − ω 2 )2 + c2 ω 2 = m2 ω0 4 − 2m2 ω0 2 ω 2 + m2 ω 4 + c2 ω 2 , we set Thus, the particular solution is yp (t) = 30. To minimize H = 0= dH 1 = [m2 ω 4 + ω 2 (c2 − 2m2 ω0 2 ) + m2 ω0 4 ]−1/2 [4m2 ω 3 + 2ω (c2 − 2m2 ω0 2 )]. dω 2 Therefore, 4m2 ω 3 + 2ω (c2 − 2m2 ω0 2 ) = 0, so that ω = 0 or 2m2 ω 2 = 2m2 ω0 2 − c2 . Since ω = 0, the latter c2 c2 case holds, and implies that ω = ± ω0 2 − . When ω = ω0 2 − the amplitude of the steady-state 2m2 2m2 solution is a maximum. 31. d2 y k F (t) c dy + y= , we substitute the given information: y (t) + 2y (t) + 5y (t) = 8 cos ωt. + dt2 m dt m m 2 Then r + 2r + 5 = 0 =⇒ r = −1 ± 2i so let yc (t) = e−t (c1 cos 2t + c2 sin 2t) and yp (t) = A cos ωt + B sin ωt. Thus we obtain the system −ω 2 B − 2ωA + 5B = 0 and −ω 2 A + 2ωB + 5A = 8. Solving this system yields: (a) From A= 40 − 8ω 2 ω 4 − 6ω 2 + 25 and B = ω4 16ω . 
− 6ω 2 + 25 516 Transient solution: yT (t) = e−t (c1 cos 2t + c2 sin 2t). Steady state solution: yS (t) = 40 − 8ω 2 16ω cos ωt + 4 sin ωt. ω 4 − 6ω 2 + 25 ω − 6ω 2 + 25 √ √ c2 = 5 − 2 = 3 maximizes the amplitude of 2 2m √ the steady-state solution (see Problem 30). Therefore, using (a), we have A = 1 and B = 3, so that √ √ √ yp (t) = cos ( 3t) + 3 sin ( 3t) √ √ √ 1 3 =2 cos ( 3t) + sin ( 3t) 2 2 √ = 2 cos ( 3t − π/3). (b) Since m = 1, k = 5, and c = 2, we have ω = ω0 2 − 32. (a) Since F (t) = 4e−t cos 2t, as t −→ ∞, F (t) exhibits oscillatory behavior with diminishing amplitude. In fact, F (t) −→ 0. ( b) The diﬀerential equation is y (t) + 2y (t) + 5y (t) = 4e−t cos 2t. Then P (r) = r2 + 2r + 5 = 0 =⇒ r = −1 ± 2i, so we have yc (t) = e−t (A cos 2t + B sin 2t) and yp (t) = te−t (C cos 2t + D sin 2t). Now from yp (t) + 2yp (t) + 5yp (t) = 4e−t cos 2t, we obtain e−t (4D cos 2t − 4C sin 2t) + te−t (0) = 4e−t cos 2t which implies that C = 0 and D = 1. Thus, yp (t) = te−t sin 2t and y (t) = e−t (A cos 2t + B sin 2t)+ te−t sin 2t. As t −→ ∞, y (t) −→ 0. d2 y + 16y = 0 then r2 + 16 = 0 =⇒ r = ±4i =⇒ yc (t) = A0 cos (4t − φ). Now let a particular solution, dt2 yp (t), take the form yp (t) = e−t (A sin t + B cos t). Substituting this expression into the diﬀerential equation, we obtain 2e−t (B sin t − A cos t) + 16e−t (A sin t + B cos t) = 130e−t cos t, 33. If which implies that (2B + 16A) sin t + (16B − 2A) cos t = 130 cos t. Thus, B + 8A = 0 and 8B − A = 65. Hence, A = −1 and B = 8. Consequently, yp (t) = e−t (8 cos t − sin t). Transient part: e−t (8 cos t − sin t). Steady-state part: A0 cos (4t − φ). Solutions to Section 6.6 True-False Review: 1. TRUE. The diﬀerential equation governing the situation in which no driving electromotive force is present is (6.6.3), and its solutions are given in (6.6.4). In all cases, q (t) → 0 as t → ∞. Since the charge decays to zero, the rate of change of charge, which is the current in the circuit, eventually decreases to zero as well. 2. 
TRUE. For the given constants, it is true that R2 < 4L/C , since R2 = 16 and 4L/C = 272. 517 3. TRUE. The amplitude of the steady-state current is given by Equation (6.6.6), which is maximum when 1 ω 2 = ω . Substituting ω 2 = LC , it follows that the amplitude of the steady-state current will be a maximum when 1 ω = ωmax = √ . LC 4. TRUE. From the form of the amplitude of the steady-state current given in Equation (6.6.6), we see that the amplitude A is directly proportional to the amplitude E0 of the external driving force. 5. FALSE. The current i(t) in the circuit is the derivative of the charge q (t) on the capacitor, given in the solution to Example 6.6.1: q (t) = A0 e−Rt/2L cos(µt − φ) + E0 cos(ωt − η ). H The derivative of this is not inversely proportional to R. 6. FALSE. The charge on the capacitor decays over time if there is no driving force. Problems: √ 1 2 1 1 3 , L = , C = , E0 = 13, and ω = 3. So ω0 = = = 3, 2 2 3 LC (1/2)(2/3) √ √ 1 9 3 13 ωE0 3 · 13 = 2 13, cos η = H = L2 (ω0 2 − ω 2 ) + R2 ω 2 = (3 − 9)2 + (9) = ,A= =√ 4 4 2 H 3 13/2 2 Rω 3 L(ω0 2 − ω 2 ) = − √ , and sin η = = √ . Thus H H 13 13 1. We are given that R = iS (t) = −A sin (ωt − η ) √ = −2 13 sin (3t − η ) √ = −2 13(sin 3t cos η − cos 3t sin η ) √ 2 = −2 13 sin 3t − √ − cos 3t 13 = 2(2 sin 3t + 3 cos 3t). 3 √ 13 4L − R2 1 C 2. Given the RLC circuit with L = 4, C = , E = E0 and R = 4, we obtain µ = == 2. Thus 17 2L d2 q dq 17 E0 qc (t) = e−t/2 (c1 cos µt + c2 sin µt). Since the diﬀerential equation takes the form 2 + + q = , we can dt dt 4 4 let qp (t) = K so that qp = qp = 0. Substituting these results into the diﬀerential equation and solving for K E0 E0 E0 ; hence, qp (t) = E0 /17 and q (t) = e−t/2 (c1 cos µt + c2 sin µt) + . As t −→ ∞, q (t) −→ . yields K = 17 17 17 Since qc tends to zero as t tends to inﬁnity, for large t that particular solution, qp (t), will be the dominant dqp part of q (t). The steady-state current is given by iS (t) = = 0. dt 3. 
When R = 0, the diﬀerential equation is where ω0 = 1 . LC d2 q 1 + q = E0 cos ωt, and so qc (t) = c1 cos ω0 t + c2 sin ω0 t, dt2 LC 518 1 , then let qp (t) = t(A cos ωt + B sin ωt). Hence, LC Case 1: If ω = qp + ω 2 qp = E0 cos ωt, and so 2ωB cos ωt − 2ωA sin ωt = E0 cos ωt. tE0 E0 , and therefore qp (t) = sin ωt so that Therefore, A = 0 and B = 2ω 2ω q (t) = c1 cos ω0 t + c2 sin ω0 t + tE0 sin ωt. 2ω 1 . LC Thus, as t −→ ∞, q (t) −→ ∞ when ω = 1 , then we take qp (t) = A cos ωt so that LC Case 2: If ω = qp + 1 qp = E0 cos ωt, LC which implies that −ω 2 A cos ωt + Therefore, A = A cos ωt = E0 cos ωt =⇒ LC A − ω 2 A cos ωt = E0 cos ωt. LC E0 LC E0 LC , and so qp (t) = cos ωt. Hence, 1 − LCω 2 1 − LCω 2 q (t) = c1 cos ω0 t + c2 sin ω0 t + E0 LC cos ωt, 1 − LCω 2 1 dq tE0 E0 , then i(t) = = (ωc2 + ) cos ωt + ( − ωc1 ) sin ωt which LC dt 2 2ω 1 implies that i(t) −→ ∞ as t −→ ∞. If ω = , then LC which is a bounded expression. If ω = i(t) = dq CLωE0 = sin ωt + ω0 c2 cos ωt − ω0 c1 sin ωt, dt CLω 2 − L which is bounded. 4. Given the RLC circuit with R = 16, L = 8, C= 1 , 40 E (t) = 17 cos 2t, E0 = 17, and ω = 2, 4L − R2 √ 1 1 C we have µ = = 2, ω0 = = = 5, and H = L2 (ω0 2 − ω 2 ) + R2 ω 2 = 2L LC 8(1/40) √ L(ω0 2 − ω 2 ) 1 Rω 4 8(5 − 22 )2 + 162 · 22 = 8 17. Then cos η = = √ and sin η = = √ so that H H 17 17 Rt q (t) = e− 2L (c1 cos µt + c2 sin µt)+ E0 1 (cos ωt cos η +sin ωt sin η ) = e−t (c1 cos 2t + c2 sin 2t)+ (cos 2t +4 sin 2t). H 8 519 1 Now q (0) = 0 =⇒ c1 = − . Thus, 8 i(t) = dq 1 = e−t [(2c2 − c1 ) cos 2t + (−2c1 − c2 ) sin 2t] + cos 2t − sin 2t. dt 4 Since i(0) = 0, we have c2 = − 9 . Thus, 16 i(t) = e−t (− cos 2t + 5. Given the RLC circuit with R = 3, L= 1 13 sin 2t) + cos 2t − sin 2t. 16 4 1 , 2 C= 1 , 5 E (t) = 2 cos ωt, and E0 = 2. So µ = 4L − R2 √ 1 C = 1, ω0 = = 10, and H = L2 (ω0 2 − ω 2 ) + R2 ω 2 = (1/2)2 (10 − ω 2 )2 + 32 ω 2 = 2L LC (10 − ω 2 )2 + 36ω 2 L(ω0 2 − ω 2 ) 10 − ω 2 Rω 3ω . 
Then cos η = = and sin η = = so that 2 H 2H H H Rt q (t) = A0 e− 2L cos (µt − φ) + E0 cos (ωt − η ), H where c1 = A0 cos φ and c2 = A0 sin φ. Therefore, q (t) = A0 e−3t cos (t − φ) + and i(t) = Thus ωmax = 2 cos (ωt − η ) H 2ω dq = −A0 e−3t [3 cos (t − φ) + sin (t − φ)] − sin (ωt − η ). dt H √ 1 = 10. LC d2 q R dq 1 1 d3 q R d2 q 1 dq + + q = E (t) with respect to t yields 3 + + = 2 2 dt L dt LC L dt L dt LC dt 2 1 dE dq d i R di 1 1 dE , but i(t) = so it follows that 2 + + i= . L dt dt dt L dt LC L dt 6. Diﬀerentiating the equation dE 7. Since E (t) = E0 e−at , = −aE0 e−at . The equation for current is identical to the equation governing dt charge, except it has a diﬀerent nonhomogeneous term. Rt ic (t) = A0 e− 2L cos (µt − φ), and substituting ip (t) = Ae−at into a2 Ae−at + d2 i R di 1 1 dE + + i= , we have dt2 L dt LC L dt R 1 1 (−aAe−at ) + (Ae−at ) = (−aE0 e−at ). L LC L Therefore, Ae−at (a2 − aR 1 aE0 −at + )=− e, L LC L 520 and so A = − a2 LC ip (t) = − where µ = aCE0 . Hence, − aCR + 1 Rt aCE0 aCE0 e−at and i(t) = A0 e− 2L cos (µt − φ) − 2 e−at , a2 LC − aCR + 1 a LC − aCR + 1 4L − R2 C . 2L 4L − R2 50t, 0 ≤ t < π, C , we have µ = = 1. Thus 50π, t ≥ π, 2L d2 q R dq 1 E (t) Rt qc (t) = e− 2L (c1 cos (µt) + c2 sin (µt)) = e−2t (c1 cos t + c2 sin t). From (6.6.2), 2 + + q= , and dt L dt LC L 2 dq dq on the interval 0 < t < π this becomes 2 +4 +5q = 100t. Using the method of undetermined coeﬃcients, dt dt if qp (t) = At + B then from the diﬀerential equation we obtain A = 20 and B = −16 so qp (t) = 20t − 16 which dq d2 q in turn yields ip (t) = 20. On the interval t ≥ π the diﬀerential equation becomes 2 + 4 + 5q = 100π . dt dt Again using the method of undetermined coeﬃcients the time with qp (t) = K we obtain after substitution and equating like terms that K = 20π so qp (t) = 20π and ip (t) = 0. dq (0) = i(0) = 0 =⇒ On [0, π ), q (t) = e−2t (c1 cos t + c2 sin t) + 20t − 16. 
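The undetermined-coefficients step just used on [0, π) can be verified by direct substitution. A minimal sketch (the `residual` helper is illustrative, not from the text) plugs qp(t) = 20t − 16 back into q″ + 4q′ + 5q = 100t:

```python
def residual(t):
    """Plug q_p(t) = 20t - 16 into q'' + 4q' + 5q - 100t; q_p' = 20, q_p'' = 0."""
    qp, dqp, d2qp = 20 * t - 16, 20.0, 0.0
    return d2qp + 4 * dqp + 5 * qp - 100 * t

print(max(abs(residual(t / 10)) for t in range(32)))  # 0.0
```

The residual is identically zero: 0 + 80 + (100t − 80) − 100t = 0.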
Given q (0) = 0 =⇒ c1 = 16 and dt −2t c2 = 12 and so q (t) = e (16 cos t + 12 sin t) + 20t − 16. Therefore, 1 2 8. Given R = 2, L = , C = and E (t) = 2 5 lim q (t) = 20π − 16(e−2π + 1). t→π − Further, i(t) = q (t) = −e−2t (20 cos t + 40 sin t) + 20, so that lim i(t) = 20(e−2π + 1). t→π − On [π, ∞), q (t) = e−2t (c3 cos t + c4 sin t) + 20π . Continuity of the solution at t = π requires that q (π ) = lim− q (t), t→π so that 20π − c3 e−2π = 20π − 16(e−2π + 1). dq Consequently, c3 = 16(e2π + 1). Now i(t) = = i(t) = e−2t [(c4 − 2c3 ) cos t − (c3 + 2c4 ) sin t], and continuity dt at t = π requires i(π ) = lim− i(t). t→π Therefore 20(e−2π + 1) = −e−2π (c4 − 2c3 ) so that c4 = 12(e2π + 1). Consequently, i(t) = e−2t [−20(e2π + 1) cos t − 40(e2π + 1) sin t] = −20e−2t (e−2π + 1)(cos t + 2 sin t) for t ≥ π . Thus 20[1 − e−2t (cos t + 2 sin t)], 0 ≤ t < π, i(t) = −20e−2t (e−2π + 1)(cos t + 2 sin t), t ≥ π. 521 Solutions to Section 6.7 True-False Review: 1. TRUE. This is essentially the statement of the variation-of-parameters method (Theorem 6.7.1). 2. FALSE. The solutions y1 , y2 , . . . , yn must form a linearly independent set of solutions to the associated homogeneous diﬀerential equation (6.7.18). 3. FALSE. The requirement on the functions u1 , u2 , . . . , un , according to Theorem 6.7.6, is that they satisfy Equations (6.7.22). Because these equations involve the derivatives of u1 , u2 , . . . , un only, constants of integration can be arbitrarily chosen in solving (6.7.22), and therefore, addition of constants to the functions u1 , u2 , . . . , un again yields valid solutions to (6.7.22). Problems: 1. Setting y +6y +9y = 0 =⇒ r2 +6r +9 = 0 =⇒ r ∈ {−3, −3} =⇒ yc (x) = e−3x (c1 +c2 x). Let y1 (x) = e−3x and y2 (x) = xe−3x . We have W [y1 , y2 ](x) = e−6x . Then a particular solution to the given diﬀerential 2e−3x . equation is yp (x) = u1 y1 + u2 y2 , where e−3x u1 + xe−3x u2 = 0 and −3e−3x u1 + e−3x (1 − 3x)u2 = 2 x +1 2 That is, u1 + xu2 = 0, −3u1 + (1 − 3x)u2 = 2 . 
Solving for u1 and u2 (and setting integration constants x +1 to zero), we have x x 2te−3t (e−3t ) 2t u1 = − dt = − dt = − ln(x2 + 1) e−6t (t2 + 1) t2 + 1 and x u2 = x 2e−3t (e−3t ) dt = e−6t (t2 + 1) t2 2 dt = 2 tan−1 (x). +1 Thus yp (x) = −e−3x ln (x2 + 1) + 2xe−3x tan−1 (x). Therefore, y (x) = e−3x [c1 + c2 x + 2x tan−1 (x) − ln (x2 + 1)]. 2. Setting y − 4y = 0 =⇒ r2 − 4 = 0 =⇒ r ∈ {2, −2} =⇒ yc (x) = c1 e2x + c2 e−2x . Let y1 (x) = e2x and y2 (x) = e−2x . We have W [y1 , y2 ](x) = −4. Then a particular solution to the given diﬀerential equation is 8 yp (x) = u1 y1 + u2 y2 , where e2x u1 + e−2x u2 = 0, 2e2x u1 − 2e−2x u2 = 2x . Solving for u1 and u2 (and e +1 setting integration constants to zero), we have x u1 = − (e2t 8e−2t dt = + 1)W [y1 , y2 ] and x u2 = (e2t x 2e−2t dt = ln(e2x + 1) − 2x − e−2x e2t + 1 8e2t dt = − ln(e2x + 1), + 1)W [y1 , y2 ] where we have used a calculator to evaluate the integral required for u1 . Thus, yp (x) = ln(e2x + 1) − 2x − e−2x e2x − ln(e2x + 1)e−2x and the general solution is y (x) = c1 e2x + c2 e−2x + ln(e2x + 1) − 2x − e−2x e2x − ln(e2x + 1)e−2x . 522 3. Setting y − 4y + 5y = 0 =⇒ r2 − 4r + 5 = 0 =⇒ r ∈ {2 + i, 2 − i} =⇒ yc (x) = e2x (c1 cos x + c2 sin x). Let y1 (x) = e2x cos x and 2 (x) = e2x sin x. Then W [y1 , y2 ](x) = e4x . A particular solution to the given diﬀerential equation is therefore yp = u1 y1 + u2 y2 , where e2x cos xu1 + e2x sin xu2 = 0 and e2x (2 cos x − sin x)u1 + e2x (2 sin x + cos x)u2 = e2x tan x. Solving for u1 and u2 (and setting integration constants to zero), we have x u1 = − e2t sin t(e2t tan t) dt = e4t and x u2 = x sin t tan tdt = sin x − ln | sec x + tan x| x e2t cos t(e2t tan t) dt = e4t sin tdt = − cos x. Consequently, yp (x) = e2x cos x(sin x−ln | sec x + tan x|)−e2x sin x cos x = −e2x cos x ln | sec x + tan x|. Thus, y (x) = e2x [cos x(c1 − ln | sec x + tan x|) + c2 sin x]. 4. Given y − 6y + 9y = 4e3x ln x =⇒ r2 − 6r + 9 = (r − 3)2 = 0 =⇒ r ∈ {3, 3} =⇒ yc (x) = c1 e3x + c2 xe3x . 
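Particular solutions produced by variation of parameters, such as those in Problems 1–3 above, are easy to spot-check numerically. A minimal sketch for Problem 1 (the finite-difference `residual` helper is illustrative, not from the text) substitutes yp(x) = −e^(−3x) ln(x² + 1) + 2xe^(−3x) tan⁻¹x back into y″ + 6y′ + 9y = 2e^(−3x)/(x² + 1):

```python
import math

def yp(x):
    """Problem 1's particular solution: -e^(-3x) ln(x^2+1) + 2x e^(-3x) arctan x."""
    return math.exp(-3 * x) * (2 * x * math.atan(x) - math.log(x * x + 1))

def residual(x, h=1e-4):
    """Finite-difference value of y'' + 6y' + 9y - 2e^(-3x)/(x^2+1) at x."""
    d1 = (yp(x + h) - yp(x - h)) / (2 * h)
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / (h * h)
    return d2 + 6 * d1 + 9 * yp(x) - 2 * math.exp(-3 * x) / (x * x + 1)

print(max(abs(residual(x / 10)) for x in range(1, 21)))  # close to zero
```

The residual stays at the level of the finite-difference error, confirming the computed yp.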
Let y1 (x) = e3x and y2 (x) = xe3x . We have W [y1 , y2 ](x) = e6x . Then a particular solution to the given diﬀerential equation is yp (x) = y1 u1 + y2 u2 , where e3x u1 + xe3x u2 = 0, 3e3x u1 + e3x (3x + 1)u2 = 4e3x ln x. Solving for u1 and u2 (and setting integration constants to zero), we have x u1 = − and x u2 = x (te3t )(4e3t ln t) dt = − e6t (e3t )(4e3t ln t) dt = e6t 4t ln tdt = x2 (1 − 2 ln x) x 4 ln tdt = 4x(ln x − 1). Consequently, yp (x) = x2 e3x (1 − 2 ln x) + 4x2 e3x (ln x − 1) = x2 e3x (2 ln x − 3), so that y (x) = c1 e3x + c2 xe3x + x2 e3x (2 ln x − 3). 5. Setting y + 4y + 4y = 0 =⇒ r2 + 4r + 4 = (r + 2)2 = 0 =⇒ r ∈ {−2, −2} =⇒ yc (x) = c1 e−2x + c2 xe−2x . Let y1 (x) = e−2x and y2 (x) = xe−2x . We have W [y1 , y2 ](x) = e−4x . Then a particular solution to the given diﬀerential equation is yp (x) = y1 u1 + y2 u2 , where u1 + xu2 = 0, −2u1 + (1 − 2x)u2 = 1 . x2 Solving for u1 and u2 (and setting integration constants to zero), we have x u1 = − (te−2t )e−2t dt = − t2 e−4t x 1 dt = − ln x t 523 and x u2 = 1 (e−2t )e−2t dt = − . 2 e−4t t x Consequently, yp (x) = −e−2x ln x − e−2x . Hence, y (x) = e−2x (c1 + c2 x − ln x − 1). 6. Setting y + 9y = 0 =⇒ r2 + 9 = 0 =⇒ r ∈ {3i, −3i} =⇒ yc (x) = c1 cos 3x + c2 sin 3x. Let y1 (x) = cos 3x and y2 (x) = sin 3x. Then, W [y1 , y2 ](x) = (cos 3x)(3 cos 3x) − (sin 3x)(−3 sin 3x) = 3 = 0 so that yp (x) = y1 u1 + y2 u2 , where x u1 = − (sin 3t)(18 sec3 3t) dt = −6 3 and x u2 = x (cos 3t)(18 sec3 3t) dt = 6 3 (sin 3t)(sec3 3t)dt = − tan2 3x x sec2 3tdt = 2 tan 3x. Consequently, yp (x) = − cos 3x tan2 3x + 2 sin 3x tan 3x. Hence, y (x) = c1 cos 3x + c2 sin 3x − cos 3x tan2 3x + 2 sin 3x tan 3x. 7. Setting y − y = 0 =⇒ r2 − 1 = 0 =⇒ r ∈ {−1, 1} =⇒ yc (x) = c1 ex + c2 e−x . Let y1 (x) = ex and y2 (x) = e−x . 
Then, W [y1 , y2 ](x) = (ex )(−e−x ) − (ex )(e−x ) = −2 = 0 so that yp (x) y1 F y2 F dx + y2 dx W W −2x 1−e e2x − 1 = ex dx − e−x dx ex + e−x ex + e−x −1 x x −x −x x = e [2 tan (e ) + e ] − e [e − 2 tan−1 (ex )] = 2(ex + e−x ) tan−1 (ex ). = −y1 Hence, y (x) = c1 ex + c2 e−x + 4 cosh x tan−1 (ex ). 8. Setting y − 2my + m2 y = 0 =⇒ r2 − 2mr + m2 = 0 =⇒ r ∈ {m, m} =⇒ yc (x) = emx (c1 + c2 x). Let y1 (x) = emx and y2 (x) = xemx . Then, W [y1 , y2 ](x) = (emx )(emx [mx + 1]) − (memx )(xemx ) = e2mx = 0 so that yp (x) = −y1 = −emx =− y2 F y1 F dx + y2 dx W W x 1 dx + xemx dx 1 + x2 1 + x2 emx ln (1 + x2 ) + xemx tan−1 (x). 2 524 √ Thus, y (x) = emx (c1 + c2 x + x tan−1 (x) − ln 1 + x2 ). 9. Setting y − 2y + y = 0 =⇒ r2 − 2r + 1 = 0 =⇒ r ∈ {1, 1} =⇒ yc (x) = c1 ex + c2 xex . Let y1 (x) = ex and y2 (x) = xex . Then, W [y1 , y2 ](x) = (ex )(ex [x + 1]) − (ex )(xex ) = e2x = 0 so that yp (x) = −y1 = −4ex y2 F dx + y2 W y1 F dx W x−2 ln xdx + 4xex x−3 ln xdx ex (2 ln x + 3). x Consequently, y (x) = ex [c1 + c2 x + x−1 (2 ln x + 3)]. = 10. Setting y + 2y + y = 0 =⇒ r2 + 2r + 1 = 0 =⇒ r ∈ {−1, −1} =⇒ yc (x) = e−x (c1 + c2 x). Let y1 (x) = e−x and y2 (x) = xe−x . Then, W [y1 , y2 ](x) = (e−x )(e−x [1 − x]) + (xe−x )(−e−x ) = e−2x = 0 so that y2 F y1 F dx + y2 dx W W x 1 √ √ = −e−x dx + xe−x dx 2 4 − x2 √4−x −1 x −x −x = −e (− 4 − x2 ) + xe sin 2. √ Consequently, y (x) = e−x (c1 + c2 x + x sin−1 x + 4 − x2 ). 2 yp (x) = −y1 11. Setting y + 2y + 17 = 0 =⇒ r2 +2r +17 = 0 =⇒ r ∈ {1+4i, 1 − 4i} =⇒ yc (x) = e−x (c1 cos 4x + c2 sin 4x). Let y1 (x) = e−x cos 4x and y2 (x) = e−x sin 4x. 
Then W [y1 , y2 ](x) = (e−x cos 4x)(e−x [4 cos 4x − sin 4x]) − (e−x sin 4x)[−e−x (4 sin 4x + cos 4x)] = 4e−2x so that y1 F dx W sin 4x cos 4x = −16e−x cos 4x dx + 16e−x sin 4x dx 3 + sin2 4x 3 + sin2 √x 4 √ 1 3 = −16e−x cos 4x − tanh−1 (cos (4x/2)) + 16e−x sin 4x tan−1 (sin (4x/ 3)) 8 12 √ √ 4 3 −x = 2e−x cos 4x tanh−1 (cos (4x/2)) + e sin 4x tan−1 (sin (4x/ 3)) 3 √ √ cos 4x + 2 43 Therefore, y (x) = e−x c1 cos 4x + c2 sin 4x + cos 4x ln + sin 4x tan−1 (sin 4x/ 3) . cos 4x − 2 3 yp (x) = −y1 y2 F dx + y2 W 12. Setting y + 9y = 0 =⇒ r2 + 9 = 0 =⇒ r ∈ {−3i, 3i} =⇒ yc (x) = c1 cos 3x + c2 sin 3x. Let y1 (x) = cos 3x and y2 (x) = sin 3x. Then, W [y1 , y2 ](x) = (cos 3x)(3 cos 3x) − (sin 3x)(−3 cos 3x) = 3 = 0 525 so that y2 F y1 F dx + y2 dx W W sin 3x cos 3x = −12 cos 3x dx + 12 sin 3x dx 4 − cos2 3x 4 − cos2 3x √ √ 3 1 tan−1 (sin 3x/ 3) . = −12 cos 3x − tanh−1 (cos 3x/2) + 12 sin 3x 6 9 √ √ 43 −1 Thus, y (x) = [c1 + 2 tanh (cos 3x/2)] cos 3x + [c2 + tan−1 (sin 3x/ 3)] sin 3x. 3 yp (x) = −y1 13. Setting y − 10y + 25y = 0 =⇒ r2 − 10r + 25 = 0 =⇒ r ∈ {5, 5} =⇒ yc (x) = e5x (c1 + c2 x). Let y1 (x) = e5x and y2 (x) = xe5x . Then, W [y1 , y2 ](x) = (e5x )[e5x (5x + 1)] − (xe5x )(5e5x ) = e10x = 0 so that yp (x) y1 F y2 F dx + y2 dx W W x 1 = −2e5x dx + 2xe5x dx 4 + x2 4 + x2 −1 5x 2 5x = −e ln (4 + x ) + xe tan (x/2) = −y1 Hence, y (x) = e5x [c1 + c2 x − ln (4 + x2 ) + x tan−1 (x/2)]. 14. Setting y −6y +13y = 0 =⇒ r2 −6r +13 = 0 =⇒ r ∈ {3+2i, 3−2i} =⇒ yc (x) = e3x (c1 cos 2x+c2 sin 2x). Let y1 (x) = e3x cos 2x and y2 (x) = e3x sin 2x. Then, W [y1 , y2 ](x) = (e3x cos 2x)(e3x [2 cos 2x + 3 sin 2x]) − (e3x sin 2x)(e3x [3 cos 2x − 2 sin 2x]) = 2e6x so that yp (x) = −y1 y2 F dx + y2 W = −2e3x cos 2x y1 F dx W sin 2x sec2 2xdx + 2e3x sin 2x sec 2xdx 3x = e (sin 2x ln | sec 2x + tan 2x| − 1). Hence, y (x) = e3x (c1 cos 2x + c2 sin 2x + sin 2x ln | sec 2x + tan 2x| − 1). 15. The complementary function for the diﬀerential equation is yc (x) = c1 cos x + c2 sin x. 
To determine a particular solution to the given diﬀerential equation we ﬁrst ﬁnd particular solutions to each of the diﬀerential equations y + y = sec x, and y + y = 4ex , and then add the two solutions together. Using variation-ofparameters, a particular solution to y + y = sec x is yp1 (x) = u1 cos x + u2 sin x, where cos xu1 + sin xu2 = 0, − sin xu1 + cos xu2 = sec x. This pair of equations has a solution u1 = − tan x and u2 = 1. Consequently, we can choose u1 (x) = ln (cos x) and u2 (x) = x, so that yp1 (x) = cos x ln (cos x) + x sin x. 526 From the method of undetermined coeﬃcients, the diﬀerential equation y + y = 4ex has a particular solution of the form yp2 (x) = 2ex . A particular solution to the given diﬀerential equation is therefore yp (x) = yp1 (x) + yp2 (x) = cos x ln (cos x) + x sin x + 2ex , and the general solution is y (x) = c1 cos x + c2 sin x + cos x ln (cos x) + x sin x + 2ex . 16. The complementary function for the diﬀerential equation is yc (x) = c1 cos x + c2 sin x. To determine a particular solution to the given diﬀerential equation we ﬁrst ﬁnd particular solutions to the diﬀerential equations y + y = csc x and y + y = 2x2 + 5x + 1, and then add the two solutions together. Using variation-of-parameters, a particular solution to y + y = csc x is yp1 (x) = u1 cos x + c2 sin x, where cos xu1 + sin xu2 = 0, − sin xu1 + cos xu2 = csc x. This pair of equations has a solution u1 = −1 and u2 = cot x. Consequently, we can choose u1 (x) = −x and u2 (x) = ln (sin x), so that yp1 (x) = −x cos x + ln (sin x) sin x. From the method of undetermined coeﬃcients, the diﬀerential equation y + y = 2x2 +5x +1 has a particular solution of the form yp2 (x) = A0 + A1 x + A2 x2 . Substituting into the preceding diﬀerential equation and equating coeﬃcients yields A0 = −3, A1 = 5 and A2 = 2. Hence, yp2 (x) = −3 + 5x + 2x2 . 
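The undetermined-coefficients result just obtained can be checked by substituting yp2(x) = 2x² + 5x − 3 (so yp2″ = 4) back into y″ + y = 2x² + 5x + 1; a quick sketch (illustrative helper, not from the text):

```python
def residual(x):
    """Plug y_p2(x) = 2x^2 + 5x - 3 into y'' + y - (2x^2 + 5x + 1); y_p2'' = 4."""
    return 4.0 + (2 * x * x + 5 * x - 3) - (2 * x * x + 5 * x + 1)

print(max(abs(residual(x / 7)) for x in range(-20, 21)))  # ~0
```

The terms cancel identically: 4 + (2x² + 5x − 3) − (2x² + 5x + 1) = 0.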
A particular solution to the given diﬀerential equation is therefore yp (x) = yp1 (x) + yp2 (x) = −x cos x + ln (sin x) sin x − 3 + 5x + 2x2 and the general solution to the diﬀerential equation is y (x) = c1 cos x + c2 sin x − x cos x + ln (sin x) sin x − 3 + 5x + 2x2 . 17. The complementary function for the diﬀerential equation is yc (x) = c1 e−2x + c2 xe−2x . To determine a particular solution to the given diﬀerential equation, we ﬁrst ﬁnd particular solutions to each of the diﬀerential equations y + 4y + 4y = 15e−2x ln x and y + 4y + 4y = 25 cos x separately, and then we add the two solutions together. Using variation-of-parameters, a particular solution to y + 4y + 4y = 15e−2x ln x is yp1 (x) = u1 e−2x + u2 xe−2x , where e−2x u1 + xe−2x u2 = 0, −2e−2x u1 + e−2x (1 − 2x)u2 = 15e−2x ln x. This pair of equations has a solution u1 = −15x ln x, Consequently, we can choose u1 (x) = − yp1 (x) = − u2 = 15 ln x. 15 2 x (2 ln x − 1) and u2 (x) = 15x(ln x − 1) so that 4 15 2 −2x 15 2 −2x xe (2 ln x − 1) + 15x2 e−2x (ln x − 1) = xe (2 ln x − 3). 4 4 527 According to the method of undetermined coeﬃcients, the diﬀerential equation y + 4y + 4y = 25 cos x has a particular solution of the form yp2 (x) = A0 cos x + A1 sin x. Substituting into the preceding diﬀerential equation and equating the coeﬃcients yields 3A0 + 4A1 = 25 and −4A0 + 3A1 = 0, with a solution A0 = 3, A1 = 4. Consequently, yp2 (x) = 3 cos x + 4 sin x. A particular solution to the given diﬀerential equation is therefore 15 2 −2x xe (2 ln x − 3) + 3 cos x + 4 sin x, 4 yp (x) = yp1 (x) + yp2 (x) = and the general solution is 15 2 −2x xe (2 ln x − 3) + 3 cos x + 4 sin x. 4 y (x) = c1 e−2x + c2 xe−2x + 18. The complementary function for the diﬀerential equation is yc (x) = c1 e−2x + c2 xe−2x . 
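Problem 17's combined particular solution can be verified the same way; the sketch below checks y'' + 4y' + 4y against 15e^{-2x} ln x + 25 cos x using centered differences (plain Python, standard library only):

```python
import math

def yp(x):
    # Combined particular solution from Problem 17
    return (15 / 4 * x**2 * math.exp(-2 * x) * (2 * math.log(x) - 3)
            + 3 * math.cos(x) + 4 * math.sin(x))

def residual(x, h=1e-5):
    # Central-difference approximation of y'' + 4y' + 4y
    d1 = (yp(x + h) - yp(x - h)) / (2 * h)
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2
    return d2 + 4 * d1 + 4 * yp(x)

x = 1.5
forcing = 15 * math.exp(-2 * x) * math.log(x) + 25 * math.cos(x)
print(abs(residual(x) - forcing))  # tiny
```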
To determine a particular solution to the given diﬀerential equation we ﬁrst ﬁnd a particular solution to each of the 4e−2x diﬀerential equations y + 4y + 4y = and y + 4y + 4y = 2x2 − 1 separately, and then we add the 1 + x2 4e−2x is two solutions together. Using variation-of-parameters, a particular solution to y + 4y + 4y = 1 + x2 yp1 (x) = u1 e−2x + u2 xe−2x , where e−2x u1 + xe−2x u2 = 0, −2e−2x u1 + (1 − 2x)e−2x u2 = 4e−2x . 1 + x2 4 4x and u2 = , so that we can choose u1 (x) = 1 + x2 1 + x2 −2 ln (1 + x2 ) and u2 (x) = 4 tan−1 x. Consequently, This pair of equations has a solution u1 = − yp1 (x) = −2e−2x ln (1 + x2 ) + 4xe−2x tan−1 x. According to the method of undetermined coeﬃcients, a particular solution to y + 4y + 4y = 2x2 − 1 is of the form yp2 (x) = A0 + A1 x + A2 x2 . Substitution into the preceding diﬀerential equation and equating coeﬃcients yields 4A2 = 2, 8A2 + 4A1 = 0 and 2A2 + 4A1 + 4A0 = −1. These equations have solution 1 1 A0 = , A1 = −1 and A2 = . Hence, 2 2 yp2 (x) = 1 1 1 − x + x2 = (x − 1)2 . 2 2 2 A particular solution to the given diﬀerential equation is therefore, 1 yp (x) = yp1 (x) + yp2 (x) = −2e−2x ln (1 + x2 ) + 4xe−2x tan−1 x + (x − 1)2 , 2 and the general solution is 1 y (x) = c1 e−2x + c2 xe−2x − 2e−2x ln (1 + x2 ) + 4xe−2x tan−1 x + (x − 1)2 . 2 528 19. The complementary function for the diﬀerential equation is yc (x) = c1 e2x + c2 xe2x + c3 x2 e2x . Let y1 (x) = e2x , y2 (x) = xe2x and y3 (x) = x2 e2x . The system (6.7.22) from the text assumes the form u1 + xu2 + x2 u3 2u1 + (1 + 2x)u2 + (2x + 2x2 )u3 4u1 + (4 + 4x)u2 + (2 + 8x + 4x2 )u3 = 0, = 0, = 36 ln x, where we have divided each equation by e2x . Solving the system we obtain u1 = 18x2 ln x, u2 = −36x ln x, and u3 = 18 ln x from which it follows that u1 = 2x3 (3 ln x − 1), u2 = 9x2 (1 − 2 ln x) and u3 = 18x(ln x − 1). Hence, yp (x) = u1 y1 + u2 y2 + u3 y3 = [2x3 (3 ln x − 1)]e2x + [9x2 (1 − 2 ln x)]xe2x + [18x(ln x − 1)]x2 e2x = x3 e2x (6 ln x − 11). 
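For the third-order equation in Problem 19, the same idea works with a five-point stencil for y'''. The operator y''' − 6y'' + 12y' − 8y is reconstructed here from the complementary function (triple root r = 2), and the forcing 36e^{2x} ln x from the system's right-hand side, so both are inferred rather than quoted:

```python
import math

def yp(x):
    # Particular solution from Problem 19
    return x**3 * math.exp(2 * x) * (6 * math.log(x) - 11)

def residual(x, h=1e-3):
    # Central differences for y', y'', and y''' (five-point stencil)
    d1 = (yp(x + h) - yp(x - h)) / (2 * h)
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2
    d3 = (yp(x + 2*h) - 2*yp(x + h) + 2*yp(x - h) - yp(x - 2*h)) / (2 * h**3)
    return d3 - 6 * d2 + 12 * d1 - 8 * yp(x)

x = 1.2
forcing = 36 * math.exp(2 * x) * math.log(x)  # inferred right-hand side
print(abs(residual(x) - forcing))  # small (finite-difference accuracy)
```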
Therefore, the general solution is y(x) = c1 e^{2x} + c2 xe^{2x} + c3 x^2 e^{2x} + x^3 e^{2x}(6 ln x − 11).

20. The complementary function to the given differential equation is yc(x) = c1 e^x + c2 xe^x + c3 x^2 e^x. Let y1(x) = e^x, y2(x) = xe^x and y3(x) = x^2 e^x. The system (6.7.22) from the text assumes the form

u1' + x u2' + x^2 u3' = 0,
u1' + (1 + x)u2' + (2x + x^2)u3' = 0,
u1' + (2 + x)u2' + (2 + 4x + x^2)u3' = 2/x^2,

where we have divided each equation by e^x. Solving the system we obtain u1' = 1, u2' = −2x^{−1}, and u3' = x^{−2}, from which it follows that u1 = x, u2 = −2 ln x, and u3 = −x^{−1}. Hence, yp(x) = e^x(x − 2x ln x − x) = −2x e^x ln x, and therefore the general solution to the differential equation is y(x) = c1 e^x + c2 xe^x + c3 x^2 e^x − 2x e^x ln x.

21. The complementary function for the differential equation is yc(x) = c1 e^{−x} + c2 xe^{−x} + c3 x^2 e^{−x}. The system (6.7.22) from the text assumes the form

u1' + x u2' + x^2 u3' = 0,
−u1' + (1 − x)u2' + (2x − x^2)u3' = 0,
u1' + (x − 2)u2' + (2 − 4x + x^2)u3' = 2/(1 + x^2),

where we have divided each equation by e^{−x}. Solving the system we obtain u1' = x^2/(1 + x^2), u2' = −2x/(1 + x^2), and u3' = 1/(1 + x^2), from which it follows that u1 = x − tan^{−1} x, u2 = −ln(x^2 + 1), and u3 = tan^{−1} x. Thus,

yp(x) = u1 y1 + u2 y2 + u3 y3 = (x − tan^{−1} x)e^{−x} − xe^{−x} ln(x^2 + 1) + x^2 e^{−x} tan^{−1} x.

Therefore, the general solution to the differential equation is

y(x) = c1 e^{−x} + c2 xe^{−x} + c3 x^2 e^{−x} + (x − tan^{−1} x)e^{−x} − xe^{−x} ln(x^2 + 1) + x^2 e^{−x} tan^{−1} x.

22. The complementary function for the differential equation is yc(x) = c1 + c2 e^{3x} + c3 xe^{3x}. Let z = y'. The given differential equation becomes z'' − 6z' + 9z = 12e^{3x}. Then r^2 − 6r + 9 = 0 =⇒ r ∈ {3, 3} =⇒ z1(x) = e^{3x} and z2(x) = xe^{3x} are two linearly independent solutions of the homogeneous differential equation z'' − 6z' + 9z = 0. The system (6.7.22) in the text therefore assumes the form

u1' + x u2' = 0,
3u1' + (3x + 1)u2' = 12,

where we have divided each equation by e^{3x}. Solving the system we obtain u1' = −12x and u2' = 12, so u1 = −6x^2 and u2 = 12x.
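From u1 = −6x^2 and u2 = 12x in Problem 22, zp = u1 z1 + u2 z2 = 6x^2 e^{3x}. A quick finite-difference check (plain Python) that this satisfies z'' − 6z' + 9z = 12e^{3x}:

```python
import math

def zp(x):
    # zp = (-6x^2)e^{3x} + (12x)(x e^{3x}) = 6x^2 e^{3x}
    return 6 * x**2 * math.exp(3 * x)

def residual(x, h=1e-5):
    # Central-difference approximation of z'' - 6z' + 9z
    d1 = (zp(x + h) - zp(x - h)) / (2 * h)
    d2 = (zp(x + h) - 2 * zp(x) + zp(x - h)) / h**2
    return d2 - 6 * d1 + 9 * zp(x)

x = 0.8
print(abs(residual(x) - 12 * math.exp(3 * x)))  # tiny
```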
Hence, zp (x) = u1 z1 + u2 z2 = (−6x2 )e3x + (12x)(xe3x ) = 6x2 e3x . Therefore, yp (x) = x2 e3x dx = zp dx = 6 2 (9x2 − 6x + 2)e3x , 9 and so the general solution to the diﬀerential equation is 2 y (x) = c1 + c2 e3x + c3 xe3x + (9x2 − 6x + 2)e3x . 9 The method of undetermined coeﬃcients would be an easier way to solve this problem. 23. Given y −y = F (x) =⇒ yc (x) = c1 ex +c2 e−x . Choose y1 (x) = ex and y2 (x) = e−x . Then W [y1 , y2 ](x) = et e−x − e−t ex 1 (ex )(−e−x ) − (e−x )(ex ) = −2. Hence, K (x, t) = = (ex−t − et−x ) = sinh (x − t), so that −2 2 x sinh (x − t)F (t)dt. yp (x) = x0 24. Given y + y − 2y = F (x) =⇒ yc (x) = c1 e−2x + c2 ex . Choose y1 (x) = e−2x and y2 (x) = ex . Then e−2t ex − et e−2x 1 W [y1 , y2 ](x) = (e−2x )(ex ) − (−2e−2x )(ex ) = 3e−x . Hence, K (x, t) = = [ex−t − e2(t−x) ], so −t 3e 3 that x 1 yp (x) = [ex−t − e2(t−x) ]F (t)dt. 3 x0 25. Given y + 5y + 4y = F (x) =⇒ yc (x) = c1 e−4x + c2 e−x . Choose y1 (x) = e−4x and y2 (x) = e−x. e−4t e−x − e−t e−4x = Then W [y1 , y2 ](x) = (e−4x )(−e−x ) − (e−x )(−4e−4x ) = 3e−5x . Hence, K (x, t) = 3e−5t 1 t−x [e − e4(t−x) ], so that 3 1 x t−x yp (x) = [e − e4(t−x) ]F (t)dt. 3 x0 26. Given y + 4y − 12y = F (x) =⇒ yc (x) = c1 e2x + c2 e−6x . Choose y1 (x) = e2x and y2 (x) = e−6x . Then e2t e−6x − e−6t e2x W [y1 , y2 ](t) = (e2t )(−6e−6t ) − (2e2t )(e−6t ) = −8e−4t . Hence, K (x, t) = , so that −8e−4t yp (x) = − 1 8 x [e6(t−x) − e−2(t−x) ]F (t)dt. x0 530 27. Given y + y = sec x =⇒ yc (x) = c1 cos x + c2 sin x. Choose y1 (x) = cos x and y2 (x) = sin x. Then W [y1 , y2 ](x) = (cos x)(cos x) − (sin x)(− sin x) = 1. Hence, K (x, t) = cos t sin x − sin t cos x, so that x x (cos t sin x − sin t cos x)(sec t)dt = yp (x) = 0 (sin x − tan t cos x)dt. 0 Thus yp (x) = x sin x + ln (cos x) cos x. Consequently, y (x) = c1 cos x + c2 sin x + x sin x + ln (cos x) cos x. Then y (0) = 0 yields c1 = 0 and y (0) = 1 yields c2 = 1. Therefore, y (x) = sin x + x sin x + ln (cos x) cos x. 28. 
Given y − 4y + 4y = 5xe2x =⇒ yc (x) = c1 e2x + c2 xe2x . Choose y1 (x) = e2x and y2 (x) = xe2x . Then e2t xe2x − te2t e2x = e2(x−t) (x − t), W [y1 , y2 ](x) = (e2x )(e2x [2x + 1]) − (2e2x )(xe2x ) = e4x . Hence, K (x, t) = e4t so that x x 5 yp (x) = 5e2(x−t) (x − t)te2t dt = 5e2x (xt − t2 )dt = x3 e2x . 6 0 0 Consequently, 5 y (x) = c1 e2x + c2 xe2x + x3 e2x . 6 Then since y (0) = 1, we have c1 = 1, and since y (0) = 0, we have c2 = −2. Therefore, 5 y (x) = e2x (1 − 2x + x3 ). 6 29. Given y − 2ay + a2 y = F (x) =⇒ yc (x) = c1 eax + c2 xeax . Choose y1 (x) = eax and y2 (x) = xeax . Then eat xeax − teat eax W [y1 , y2 ](x) = e2ax . Hence, K (x, t) = = ea(x−t) (x − t). e2at (a) x x x t eat dt = αeax −2 yp (x) = α ea(x−t) (x − t) 2 dt. 2 2 + β2 t +β t t + β2 0 0 Thus yp (x) = αeax x tan−1 β x β − 1 ln 2 x2 + β 2 β2 . (b) x ea(x−t) (x − t) yp (x) = α 0 Thus yp (x) = αeax [x sin−1 (c) x β eat dt = αeax (β 2 − t2 )1/2 0 (β 2 x t −2 dt. 2 )1/2 −t (β − t2 )1/2 + (β 2 − x2 )1/2 − β 2 ]. x yp (x) = x x ea(x−t) (x − t)eat tα ln tdt = −eax x tα+1 ln tdt + xeat xeax [(ln x)2 − 2 ln x − 2]. 2 (ln x)2 If α = −2 then yp (x) = −eax + ln x + 1 . 2 eax xα+2 2α + 3 If α = −1 and α = −2 then yp (x) = ln x − . (α + 1)(α + 2) (α + 1)(α + 2) If α = −1 then yp (x) = tα ln tdt. 531 30. Given y + y = F (x) =⇒ yc (x) = c1 cos x + c2 sin x. Choose y1 (x) = cos x and y2 (x) = sin x. Then W [y1 , y2 ](x) = 1, and so K (x, t) = cos t sin x − sin t cos x = sin (x − t). Hence, x sin (x − t)F (t)dt yp (x) = x0 so that x sin (x − t)dt. y (x) = c1 cos x + c2 sin x + x0 Imposing the initial conditions yields the two equations c1 cos x0 + c2 sin x0 = y0 − c1 sin x0 + c2 cos x0 = y1 . 
and Solving these systems for c1 and c2 , we obtain c1 = y0 cos x0 − y1 sin x0 and c2 = y1 cos x0 + y0 sin x0 , so that x y (x) = (y0 cos x0 − y1 sin x0 ) cos x + (y1 cos x0 + y0 sin x0 ) sin x + F (t) sin (x − t)dt x0 x = y0 (cos x cos x0 + sin x sin x0 ) + y1 (sin x cos x0 − cos x sin x0 ) + F (t) sin (x − t)dt x0 x = y0 cos (x − x0 ) + y1 sin (x − x0 ) + F (t) sin (x − t)dt. x0 31. The complementary equation for the diﬀerential equation is yc (x) = c1 erx + c2 xerx + c3 x2 erx . Let y1 (x) = erx , y2 (x) = xerx , and y3 (x) = x2 erx . The system (6.7.22) in the text therefore assumes the form u1 + tu2 + t2 u3 ru1 + (rt + 1)u2 + (rt2 + 2t)u3 2 rt 2 r e u1 + (r t + 2r)ert u2 + (r2 t2 + 4rt + 2)ert u3 Row reducing the augmented matrix, 1 t r rt + 1 r2 ert (r2 t + 2r)ert we obtain t2 2 rt + 2t (r2 t2 + 4rt + 2)ert 1 t t2 0 1 2t 001 u1 (x) = 1 2 F (x) , 2ert u2 = x2 a t F (x) 1 dt, u2 (x) = − er t 2 tF (x) t2 F (x) and u1 = . Thus, rt e 2ert x a 0 0 , F (x) 0 0 . F (x) 2ert By back-substitution, we ﬁnd that u3 = = 0, = 0, = F (x). 2tF (x) 1 dt and u3 (x) = ert 2 x a F (x) dt. 2ert 532 Hence, yp (x) = erx u1 + xerx u2 + x2 erx u3 xerx x 2tF (t) x2 erx erx x t2 F (t) dt − dt + rt rt 2a e 2 e 2 a x 1 F (t)[t2 − 2tx + x2 ]er(x−t) dt = 2a 1x F (t)(x − t)2 er(x−t) dt. = 2a x = a F (t) dt ert 32. (a) Note that F (x) 0 0 . . . is precisely the right-hand side vector in the system (6.7.22), and therefore, the 1 solutions for the unknowns u1 , u2 , . . . , uk are precisely given by F (x)Wk (x) , W [y1 , y2 , . . . , yn ](x) uk = k = 1, 2, . . . , n, by a direct application of Cramer’s Rule. (b) Using part (a), we have yp (x) = y1 (x)u1 (x) + y2 (x)u2 (x) + · · · + yn (x)un (x) x = y1 (x) x0 W1 (t) F (t)dt + y2 (x) W (t) x x0 x W2 (t) F (t)dt + · · · + yn (x) W (t) x0 Wn (t) F (t)dt. 
W (t) Since the yi (x) depend on x, whereas the integration is performed with respect to t, we can combine the integrals in the preceding equation to obtain x yp (x) = x0 y1 (x)W1 (t) + y2 (x)W2 (t) + · · · + yn (x)Wn (t) F (t)dt = W [y1 , y2 , . . . , yn ](t) x K (x, t)F (t)dt, x0 which is the desired expression. 33. Three linearly independent solutions to the associated homogeneous problem are y1 (x) = e−3x , y2 (x) = e3x , and y3 (x) = e−5x . Then W [y1 , y2 , y3 ](t) = W1 = 0 e3t 0 3e3t 1 9e3t So, K (x, t) = e−5t −5e−5t 25e−5t −2t = −8e , W2 = e−3t −3e−3t 9e−3t e−3t −3e−3t 9e−3t e3t 3e3t 9e3t e−5t −5e−5t 25e−5t 0 e−5t 0 −5e−5t 1 25e−5t = 96e−5t , −8t = 2e , W3 = e−3t −3e−3t 9e−3t 1 3(x−t) e−3x (−8e−2t ) + e3x (2e−8t ) + e−5x (6) = [e + 3e−5(x−t) − 4e−3(x−t) ]. Hence, −5t 96e 48 yp (x) = 1 48 x [e3(x−t) + 3e−5(x−t) − 4e−3(x−t) ]F (t)dt. x0 e3t 3e3t 9e3t 0 0 1 = 6. 533 34. Three linearly independent solutions to the associated homogeneous problem are y1 (x) = e−x , y2 (x) = cos 3x, and y3 (x) = sin 3x. Then W [y1 , y2 , y3 ](t) = W1 = 0 cos 3t 0 −3 sin 3t 1 −9 cos 3t sin 3t 3 cos 3t −9 sin 3t cos 3t −3 sin 3t −9 cos 3t e−t −e−t e−t = 3, W2 = e−t −e−t e−t W3 = e−t −e−t e−t cos 3t −3 sin 3t −9 cos 3t 0 0 1 sin 3t 3 cos 3t −9 sin 3t = 30e−t , 0 sin 3t 0 3 cos 3t 1 −9 sin 3t = e−t [sin 3t + 3 cos 3t], = e−t [cos 3t − 3 sin 3t]. So, e−x (3) − cos 3xe−t [sin 3t + 3 cos 3t] + sin 3xe−t [cos 3t − 3 sin 3t] 30e−t 1 = [3et−x + (sin 3x cos 3t − cos 3x sin 3t) − 3(cos 3x cos 3t + sin 3x sin 3t)] 30 1 [3et−x + sin (3(x − t)) − 3 cos (3(x − t))]. = 30 K (x, t) = Hence, yp (x) = x 1 30 [3et−x + sin (3(x − t)) − 3 cos (3(x − t))]F (t)dt. x0 35. Three linearly independent solutions to the associated homogeneous problem are y1 (x) = e−4x , y2 (x) = xe−4x , and y3 (x) = e2x . 
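These third-order kernels all share the initial-value properties K(x, x) = 0, ∂K/∂x(x, x) = 0, ∂²K/∂x²(x, x) = 1, which is exactly what makes yp(x) = ∫ K(x, t)F(t) dt reproduce F when the operator is applied. A quick check of these properties for the kernel of Problem 33, with the x-derivatives computed by hand (plain Python):

```python
import math

def K(x, t):
    # Kernel from Problem 33
    return (math.exp(3*(x - t)) + 3*math.exp(-5*(x - t)) - 4*math.exp(-3*(x - t))) / 48

def Kx(x, t):
    # dK/dx, differentiated by hand
    return (3*math.exp(3*(x - t)) - 15*math.exp(-5*(x - t)) + 12*math.exp(-3*(x - t))) / 48

def Kxx(x, t):
    # d^2K/dx^2, differentiated by hand
    return (9*math.exp(3*(x - t)) + 75*math.exp(-5*(x - t)) - 36*math.exp(-3*(x - t))) / 48

x = 2.0
print(K(x, x), Kx(x, x), Kxx(x, x))  # prints: 0.0 0.0 1.0
```

On the diagonal t = x every exponential equals 1, so the coefficients must sum to 0, 0, and 48 respectively, which they do.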
Then e−4t W [y1 , y2 , y3 ](t) = −4e−4t 16e−4t 0 te−4t (1 − 4t)e−4t W1 = 0 1 (−8 + 16t)e−4t e2t 2e2t = 36e−6t , 4e2t e2t e−4t −2t 2t 2e = e (6t − 1), W2 = −4e−4t 2t 4e 16e−4t e−4t W3 = −4e−4t 16e−4t Hence, K (x, t) = quently, te−4t (1 − 4t)e−4t (−8 + 16t)e−4t te−4t (1 − 4t)e−4t (−8 + 16t)e−4t 0 e2t 0 2e2t = −6e−2t , 1 4e2t 0 0 = e−8t . 1 1 4(t−x) e−2t (6t − 1)e−4x − 6e−2t (xe−4x ) + e−8t (e2x ) = [e (6(t − x) − 1) + e2(x−t) ]. Conse−6t 36e 36 yp (x) = 1 36 x [e4(t−x) (6(t − x) − 1) + e2(x−t) ]F (t)dt. x0 36. Three linearly independent solutions to the associated homogeneous problem are y1 (x) = e2x cos 3x, y2 (x) = e2x sin 3x, and y3 (x) = e3x . Then e2t cos 3t e2t sin 3t 2t e (2 sin 3t + 3 cos 3t) W [y1 , y2 , y3 ](t) = e (2 cos 3t − 3 sin 3t) e2t (−5 cos 3t − 12 sin 3t) e2t (−5 sin 3t + 12 cos 3t) 2t e3t 3e3t 30e7t . 9e3t 534 Furthermore, 0 e2t sin 3t 2t e (2 sin 3t + 3 cos 3t) W1 (t) = 0 1 e2t (−5 sin 3t + 12 cos 3t) e3 t 3e3t = e5t (sin 3t − 3 cos 3t), 9e3t e2t cos 3t 0 e3t 0 3e3t = −e5t (cos 3t + 3 sin 3t), W2 (t) = e (2 cos 3t − 3 sin 3t) 2t e (−5 cos 3t − 12 sin 3t) 1 9e3t 2t e2t cos 3t e2t sin 3t 0 2t e (2 sin 3t + 3 cos 3t) 0 = 3e4t . W3 = e (2 cos 3t − 3 sin 3t) 2t 2t e (−5 cos 3t − 12 sin 3t) e (−5 sin 3t + 12 cos 3t) 1 2t Hence, e5t (sin 3t − 3 cos 3t)e2x cos 3x − e5t (cos 3t + 3 sin 3t)e2x sin 3x + 3e4t e3x 30e7t 1 2(x−t) 1 = K (x, t) = e [sin (3(t − x)) − 3 cos (3(t − x))] + e3(x−t) . 30 10 K (x, t) = Consequently, x yp (x) = x0 1 1 2(x−t) e [sin (3(t − x)) − 3 cos (3(t − x))] + e3(x−t) F (t)dt. 30 10 37. Three linearly independent solutions to the associated homogeneous problem are y1 (x) = ex , y2 (x) = e2x , and y3 (x) = e−4x . Then et e2t e−4t t 2t −4e−4t . 
W [y1 , y2 , y3 ](t) = e 2e t 2t e 4e 16e−4t Furthermore, 0 e2t W1 (t) = 0 2e2t 1 4e2t e−4t et −2t −4t −4e = −6e , W2 (t) = et −4t 16e et 0 e−4t et −3t −4t 0 −4e = 5e , W3 (t) = et −4t 1 16e et Consequently, 1 ex (−6e−2t ) + e2x (5e−3t ) + e−4x (e3t ) 30e−t 1 −4(x−t) = e + 5e2(x−t) − 6ex−t , 30 K (x, t) = and a particular solution to the given diﬀerential equation is yp (x) = 1 30 x e−4(x−t) + 5e2(x−t) − 6ex−t F (t)dt. x0 Solutions to Section 6.8 True-False Review: e2t 2e2t 4e2t 0 0 = e3t . 1 535 1. FALSE. First of all, the given equation only addresses the case of a second-order Cauchy-Euler equation. Secondly, the term containing y must contain a factor of x (see Equation 6.8.1): x2 y + a1 xy + a2 y = 0. 2. TRUE. The indicial equation (6.8.2) in this case is r(r − 1) − 2r − 18 = r2 − 3r − 18 = 0, with real and distinct roots r = 6 and r = −3. Hence, y1 (x) = x6 and y2 (x) = x−3 are two linearly independent solutions to this Cauchy-Euler equation. 3. FALSE. The indicial equation (6.8.2) in this case is r(r − 1) + 9r + 16 = r2 + 8r + 16 = (r + 4)2 = 0, and so we have only one real root, r = 4. Therefore, the only solutions to this Cauchy-Euler equation of the form y = xr take the form y = cx−4 . Therefore, only one linearly independent solution of the form y = xr to the Cauchy-Euler equation has been obtained. 4. TRUE. The indicial equation (6.8.2) in this case is r(r − 1) + 6r + 6 = r2 + 5r + 6 = (r + 2)(r + 3) = 0, with roots r = −2 and r = −3. Therefore, the general solution to this Cauchy-Euler equation is y (x) = c1 x−2 + c2 x−3 = c2 c1 + 3. x2 x Therefore, as x → +∞, y (x) for all values of the constants c1 and c2 . 5. TRUE. A solution obtained by the method in this section containing the function ln x implies a repeated root to the indicial equation. In fact, such a solution takes the form y2 (x) = xr1 lnx. Therefore, if y (x) = ln x/x is a solution, we conclude that r1 = r2 = −1 is the only root of the indicial equation r(r − 1)+ a1 r + a2 = − 0. 
From Case 2 in this section, we see that −1 = −(a1 − 1)/2, and so a1 = 3. Moreover, the discriminant of Equation (6.8.3) must be zero: (a1 − 1)^2 − 4a2 = 0. That is, 2^2 − 4a2 = 0, so a2 = 1. Therefore, the Cauchy-Euler equation in question, with a1 = 3 and a2 = 1, must be as given.

6. FALSE. The indicial equation (6.8.2) in this case is r(r − 1) − 5r − 7 = r^2 − 6r − 7 = (r − 7)(r + 1) = 0, with roots r = 7 and r = −1. Therefore, the general solution to this Cauchy-Euler equation is y(x) = c1 x^7 + c2 x^{−1}, and this is not an oscillatory function of x for any choice of the constants c1 and c2.

Problems:

1. We are given that x^2 y'' − x y' + 5y = 0. If y(x) = x^r, then the indicial equation is r^2 − 2r + 5 = 0 =⇒ r = 1 ± 2i =⇒ y1(x) = x sin(2 ln x), y2(x) = x cos(2 ln x) =⇒ y(x) = x[c1 sin(2 ln x) + c2 cos(2 ln x)].

2. We are given that x^2 y'' − 6y = 0. If y(x) = x^r, then the indicial equation is r^2 − r − 6 = 0 =⇒ r ∈ {−2, 3} =⇒ y(x) = c1 x^{−2} + c2 x^3.

3. We are given that x^2 y'' − 3x y' + 4y = 0. If y(x) = x^r, then the indicial equation is r^2 − 4r + 4 = 0 =⇒ r ∈ {2, 2} =⇒ y1(x) = x^2, y2(x) = x^2 ln x =⇒ y(x) = x^2(c1 + c2 ln x).

4. We are given that x^2 y'' − 4x y' + 4y = 0. If y(x) = x^r, then the indicial equation is r^2 − 5r + 4 = 0 =⇒ r ∈ {1, 4} =⇒ y(x) = c1 x + c2 x^4.

5. We are given that x^2 y'' + 3x y' + y = 0. If y(x) = x^r, then the indicial equation is r^2 + 2r + 1 = 0 =⇒ r ∈ {−1, −1} =⇒ y1(x) = x^{−1}, y2(x) = x^{−1} ln x =⇒ y(x) = x^{−1}(c1 + c2 ln x).

6. We are given that x^2 y'' + 5x y' + 13y = 0. If y(x) = x^r, then the indicial equation is r^2 + 4r + 13 = 0 =⇒ r = −2 ± 3i =⇒ y(x) = x^{−2}[c1 cos(3 ln x) + c2 sin(3 ln x)].

7. We are given that x^2 y'' − x y' − 35y = 0. If y(x) = x^r, then the indicial equation is r^2 − 2r − 35 = 0 =⇒ r ∈ {−5, 7} =⇒ y1(x) = x^7, y2(x) = x^{−5} =⇒ y(x) = c1 x^7 + c2 x^{−5}.

8. We are given that x^2 y'' + x y' + 16y = 0. If y(x) = x^r, then the indicial equation is r^2 + 16 = 0 =⇒ r ∈ {−4i, 4i} =⇒ y(x) = c1 cos(4 ln x) + c2 sin(4 ln x).

9.
If y (x) = xr then the indicial equation is r2 − m2 = 0 =⇒ r ∈ {−m, m} =⇒ y (x) = c1 xm + c2 x−m . 10. If y (x) = xr then the indicial equation is r2 − 2mr + m2 = 0 =⇒ r ∈ {m, m} =⇒ y1 (x) = xm , y2 (x) = xm ln x =⇒ y (x) = xm (c1 + c2 ln x). 11. If y (x) = xr then the indicial equation is r2 − 2mr + (m2 + k 2 ) = 0 =⇒ r ∈ {m + ki, m − ki} =⇒ y (x) = xm [c1 cos (k ln x) + c2 sin (k ln x)]. 12. dy dz 1 dy d2 y 1 d2 y 1 dy dy = = and = 2 2 − 2 . Hence, substituting (a) Let x = ez . Then z = ln x and 2 dx dz dx x dz dx x dz x dz into the given diﬀerential equation we obtain x2 1 1 dy −2 2 x x dz + a1 x 1 dy x dz + a2 y = 0, so that d2 y dy + (a1 − 1) + a2 y = 0. 2 dz dz 537 (b) Let x = ez . Then dy2 (x) dy1 (x) − y2 (x) dx dx dy2 (z ) dy1 (z ) = y1 (z ) − y2 (z ) dx dx dy2 (z ) dz dy1 (z ) dz = y1 (z ) − y2 (x) dz dx dz dx dz dy2 (z ) dy1 (z ) = y1 (z ) − y2 (z ) dx dz dz dz = W [y1 , y2 ](z ). dx Thus, if y1 (z ) and y2 (z ) are linearly independent solutions of (6.8.24), then y1 (ln x) and y2 (ln x) are linearly independent solutions of (6.8.23). W [y1 , y2 ](x) = y1 (x) 13. If y (x) = (−x)r , y = r(−x)r−1 , y = r(r − 1)(−x)r−2 . Substituting these results into the original equation and simplifying yields x2 r(r − 1)(−x)r−2 + ax(r)(−x)r−1 + b(−x)r = 0 =⇒ r2 − r + ar + b = 0 =⇒ r2 + r(a − 1) + b = 0. 14. Consider x2 y + 4xy + 2y = 0. For y (x) = xr , the indicial equation is given by r2 + 3r + 2 = 0 =⇒ r ∈ {−1, −2} =⇒ yc (x) = c1 x−1 + c2 x−2 . Let y1 (x) = x−1 and y2 (x) = x−2 . Then W [y1 , y2 ](x) = (x−1 )(−2x−3 ) − (x−2 )(−x−2 ) = −x−4 = 0 for x > 0, so that yp (x) = −y1 4 y1 F dx = W x y2 F dx + y2 W ln xdx − 4 x2 x ln xdx = 2 ln x − 3. Thus, y (x) = c1 x−1 + c2 x−2 + 2 ln x − 3. 15. Consider x2 y − 4xy + 6y = 0. For y (x) = xr , the indicial equation is given by r2 − 5r + 6 = 0 =⇒ r ∈ {2, 3} =⇒ yc (x) = c1 x2 + c2 x3 . Let y1 (x) = x2 and y2 (x) = x3 . 
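The indicial-equation recipe used in the problems above is mechanical enough to automate. The sketch below (plain Python; real, distinct roots assumed) solves r(r − 1) + a1 r + a2 = 0 and confirms by finite differences that x^r solves x^2 y'' + a1 x y' + a2 y = 0, using Problem 2 (a1 = 0, a2 = −6) as the test case:

```python
import math

def indicial_roots(a1, a2):
    # Roots of r(r - 1) + a1*r + a2 = r^2 + (a1 - 1)r + a2 = 0
    b, c = a1 - 1, a2
    s = math.sqrt(b * b - 4 * c)   # assumes real, distinct roots
    return ((-b + s) / 2, (-b - s) / 2)

def residual(r, a1, a2, x, h=1e-5):
    # Central-difference check that y = x^r satisfies the Cauchy-Euler equation
    y = lambda u: u**r
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return x**2 * d2 + a1 * x * d1 + a2 * y(x)

r1, r2 = indicial_roots(0, -6)   # Problem 2: x^2 y'' - 6y = 0
print(r1, r2)                    # roots 3 and -2
for r in (r1, r2):
    print(abs(residual(r, 0, -6, 2.0)))  # both residuals are tiny
```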
Then W [y1 , y2 ](x) = (x2 )(3x2 ) − (x3 )(2x) = x4 = 0 for x > 0, so that yp (x) = −y1 y2 F dx + y2 W y1 F dx = −x2 W = −x2 x3 (x4 sin x) dx + x3 x2 x4 x sin xdx + x3 x2 (x4 sin x) dx x2 x4 sin xdx = −x2 sin x. Thus, y (x) = c1 x2 + c2 x3 − x2 sin x. 16. Consider x2 y + 6xy + 6y = 0. For y (x) = xr , the indicial equation is given by r2 + 5r + 6 = 0 =⇒ r ∈ {−3, −2} =⇒ yc (x) = c1 x−2 + c2 x−3 . Let y1 (x) = x−2 and y2 (x) = x−3 . Then W [y1 , y2 ](x) = (x−2 )(−3x−4 ) − (x−3 )(−2x−3 ) = −x−6 = 0 so that yp (x) = −y1 y2 F dx + y2 W Thus, y (x) = c1 x−2 + c2 x−3 + y1 F 1 (1/x3 )4e2x 1 (1/x2 )(4e2x ) dx = − 2 dx + 3 dx 2 (−1/x6 ) W x x x x2 (−1/x6 ) 4 4 e2x (x − 1) =2 xe2x dx − 3 x2 e2x dx = . x x x3 e2x (x − 1) . x3 538 17. Consider x2 y − 3xy + 4y = 0. For y (x) = xr , the indicial equation is given by r2 − 4r + 4 = 0 =⇒ r ∈ {2, 2} =⇒ yc (x) = c1 x2 + c2 x2 ln x. Let y1 (x) = x2 and y2 (x) = x2 ln x. Then W [y1 , y2 ](x) = (x2 )(2x ln x) − (x2 ln x)(2x) = x3 = 0 for x > 0, so that yp (x) = −y1 y2 F dx + y2 W x2 ln x(x2 / ln x) y1 F dx + x2 ln x dx = −x2 W x2 x3 = −x2 ln x + x2 ln x ln | ln x| x2 (x2 / ln x) dx x2 x3 for x > 0. Hence, y (x) = x2 [c1 + c2 ln x + ln x(ln | ln x| − 1)]. 18. Consider x2 y + 4xy + 2y = 0. For y (x) = xr , the indicial equation is given by r2 + 3r + 2 = 0 =⇒ r ∈ {−1, −2} =⇒ yc (x) = c1 x−1 + c2 x−2 . Let y1 (x) = x−2 and y2 (x) = x−1 . Then W [y1 , y2 ](x) = (x−2 )(−x−2 ) − (x−1 )(−2x−3 ) = x−4 = 0 for x > 0, so that yp (x) = −y1 y2 F dx + y2 W Therefore, y (x) = c1 x−1 + c2 x−2 − y1 F 1 dx = − 2 W x 1 =− 2 x (1/x) cos x 1 (1/x2 ) cos x dx + dx x2 (1/x4 ) x x2 (1/x4 ) 1 1 x cos xdx + cos xdx = − 2 cos x. x x 1 cos x. x2 19. Consider x2 y + xy + 9y = 0. For y (x) = xr , the indicial equation is given by r2 + 9 = 0 =⇒ r ∈ {−3i, 3i} =⇒ yc (x) = c1 cos (3 ln x) + c2 sin (3 ln x). Let y1 (x) = cos (3 ln x) and y2 (x) = sin (ln x). 
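Problem 15's particular solution yp = −x^2 sin x can be checked directly against its forcing term; the equation x^2 y'' − 4x y' + 6y = x^4 sin x is inferred here from the integrands in the worked solution, so treat the right-hand side as an assumption:

```python
import math

def yp(x):
    # Particular solution from Problem 15: yp = -x^2 sin x
    return -x**2 * math.sin(x)

def residual(x, h=1e-5):
    # Central-difference approximation of x^2 y'' - 4x y' + 6y
    d1 = (yp(x + h) - yp(x - h)) / (2 * h)
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2
    return x**2 * d2 - 4 * x * d1 + 6 * yp(x)

x = 1.7
print(abs(residual(x) - x**4 * math.sin(x)))  # tiny
```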
Then W [y1 , y2 ](x) = [cos (3 ln x)] 3 cos (3 ln x) 3 sin (3 ln x) 3 − [sin (3 ln x)] − = =0 x x x for x > 0, so that yp (x) = −y1 y2 F dx+y2 W y1 F dx = − cos (3 ln x) W sin (3 ln x)9 ln x dx+sin (3 ln x) x2 (3/x) cos (3 ln x)(9 ln x) dx. x2 (3/x) Making the change of variables u = 3 ln x in the preceding integrals yields 1 1 yp (x) = − cos(3 ln x) u sin u du + sin(3 ln x) u cos u du 3 3 1 1 1 = − cos(3 ln x)(−u cos u + sin u) + sin(3 ln x)(u sin u + cos u) = ln x. 3 3 3 Consequently, y (x) = c1 cos (3 ln x) + c2 sin (3 ln x) + 1 ln x. 3 20. Consider x2 y − xy + 5y = 0 For y (x) = xr , the indicial equation is given by r2 − 2r + 5 = 0 =⇒ r ∈ {1 − 2i, 1+2i} =⇒ yc (x) = x[c1 cos (2 ln x)+ c2 sin (2 ln x)]. Let y1 (x) = x cos (2 ln x) and y2 (x) = x sin (2 ln x). Then W [y1 , y2 ](x) = [x cos (2 ln x)][2 cos (2 ln x) + sin (2 ln x)] − [x sin (2 ln x)][cos (2 ln x) − 2 sin (2 ln x)] = 2x = 0 539 for x > 0, so that y2 F y1 F dx + y2 dx W W x sin (2 ln x)[8x(ln x)2 ] x cos (2 ln x)[8x(ln x)2 ] = −x cos (2 ln x) dx + x sin (2 ln x) dx 2 (2x) x x2 (2x) sin(2 ln x) · (ln x)2 cos(2 ln x) · (ln x)2 = −4x cos(2 ln x) dx + 4x sin(2 ln x) dx. x x yp (x) = −y1 Making the change of variables u = 2 ln x in the preceding integrals yields x cos (2 ln x) 2 = x[2(ln x)2 − 1]. yp (x) = − u2 sin u du + x sin (2 ln x) 2 u2 cos u du Thus, y (x) = x[c1 cos (2 ln x) + c2 sin (2 ln x) + 2(ln x)2 − 1]. 21. Consider x2 y − (2m − 1)xy + m2 y = 0. For y (x) = xr , the indicial equation is given by r2 − 2mr + m2 = 0 =⇒ r ∈ {m, m} =⇒ yc (x) = c1 xm + c2 xm ln x. Let y1 (x) = xm and y2 (x) = xm ln x. Then W [y1 , y2 ](x) = (xm )[xm−1 (1 + m ln x)] − (xm ln x)(mxm−1 ) = x2m−1 = 0 for x > 0, so that yp (x) = −y1 y2 F dx + y2 W y1 F dx = −xm W = −xm xm ln x[xm (ln x)k ] xm [xm (ln x)k ] dx + xm ln x dx 2 (x2 )m−1 x x2 (x2 )m−1 (ln x)k+1 (ln x)k dx + xm ln x dx. (21.1) x x Case 1: If k = −1 and k = −2, then Equation (21.1) becomes yp (x) = − xm (ln x)k+2 (ln x)k+1 xm (ln x)k+2 + xm ln x = . 
k+2 k+1 (k + 1)(k + 2) Case 2: If k = −1, then Equation (21.1) becomes yp (x) = −xm 1 dx + xm ln x x 1 dx = (ln | ln x| − 1)xm ln x. x ln x Case 3: If k = −2 then Equation (21.1) becomes yp (x) = −xm 1 dx + xm ln x x ln x 1 dx = −xm (1 + ln | ln x|). x(ln x)2 22. (a) The indicial equation is r(r − 1) − r + 5 = 0 =⇒ r ∈ {1 + 2i, 1 − 2i} so the √ general solution to √ the diﬀerential equation is y (x) = x[c1 cos (2 ln x) + c2 sin (2 ln x)]. Given y (1) = 2 =⇒ c1 = 2. √ 2√ 2 Further, y (x) = 2 cos (2 ln x) + c2 sin (2 ln x) + x − 2 sin (2 ln x) + c2 cos (2 ln x) , so that y (1) = x x 540 √ √ √ 3 2 =⇒ c2 = 2. Hence, y (x) = 2x[cos (2 ln x) + sin (2 ln x) = 2x √ √ 2 2 cos (2 ln x) + sin (2 ln x) = 2 2 2x[cos (2 ln x) cos (π/4) + sin (2 ln x) sin (π/4)] = 2x cos (2 ln x − π/4). (b) The zeros of y (x) occur when 2 ln x − π/4 = (2n + 1)π/2, n = 0, ±1, ±2, ... Solving for x gives: x = eπ(4n+3)/8 , n = 0, ±1, ±2, ... The zeros corresponding to n = −2, −1, 0, 1, 2 are, respectively, 0.1403669226, 0.6752319066, 3.248187814, 15.62533401, 75.16531584. (c) y x Figure 88: Figure for Problem 22 23. (a) The indicial equation is r(r − 1) + r + 25 = 0 =⇒ r ∈ {5i, −5i}. The general solution to the diﬀerential √ √ 33 15 33 then c1 = and y (1) = equation is therefore y (t) = c1 cos (5 ln t) + c2 sin (5 ln t). Given y (1) = 2 2 2 3 then c2 = . Hence, the solution to the initial-value problem is 2 √ 1 3 cos (5 ln t) + sin (5 ln t) = 3 cos (5 ln t − π/6). y (t) = 3 2 2 (b) The zeros of y (t) occur when 5 ln t − π/6 = (2n + 1)π/2, n = 0, ±1, ±2, ... Solving for t yields t = eπ(6n+5)/30 . 541 (c) y 3 x 2 -3 Figure 89: Figure for Problem 23(c) (d) The motion is oscillatory but not periodic. Consequently the system is not performing simple harmonic motion. 24. (a) We have Axa c1 c2 √ cos (b ln x) + √ 2 sin (b ln x) 2+c 2 2+c 2 c1 c1 c1 + c2 2 2 2 Axa [cos (b ln x) sin φ + sin (b ln x) cos φ], =√ 2 c1 + c2 2 Axa [c1 cos (b ln x) + c2 sin (b ln x)] = √ where cos φ = √ c1 c2 and sin φ = √ 2 . 
Hence, c1 2 + c2 2 c1 + c2 2 y (x) = Axa cos (b ln x − φ). (b) The zeros of the solution arise when b ln x − φ = (2k + 1) π, k = 0, ±1, ±2, ...; 2 that is, when xk = e[(2k+1)π/2+φ]/b , k = 0, ±1, ±2, ... Hence there are an inﬁnite number of zeros. As k −→ −∞, the zeros of the function approach x = 0, whereas if k −→ +∞, the zeros of the function also tend to +∞. The distance between successive zeros is ∆xk = e[(2k+3)π/2+φ]/b − e[(2k+1)π/2+φ]/b = e(2kπ+φ)/b (e3π/b − eπ/b ) = e[2π(k+1)+φ]/b (eπ/b − e−π/b ) = 2e[2π(k+1)+φ]/b sinh (π/b). We see that limk→−∞ ∆xk = 0. Therefore, as x −→ 0+ , the distance between successive zeros also approaches zero. On the other hand, limk→∞ ∆xk = ∞ so as x −→ ∞, the distance between successive zeros increases without bound. 542 (c) We have y (x) = Axa cos (b ln x − φ). If a > 0: The general behavior is oscillatory. The amplitude increases as x −→ ∞, whereas the amplitude approaches zero as x −→ 0+ . If a < 0: The general behavior is oscillatory. The amplitude approaches zero as x −→ ∞, whereas the amplitude increases without bound as x −→ 0+ . If a = 0: The general behavior is oscillatory with constant amplitude. y y x x y x Figure 90: Figure for Problem 24 Solutions to Section 6.9 Problems: 1. We are given that y1 (x) = x2 . Let y2 (x) = x2 u so y2 = 2xu + x2 u , and y2 = x2 u + 4xu + 2u. Substituting these results into the given diﬀerential equation and simplifying we obtain xu + u = 0 =⇒ 1 u = − =⇒ u(x) = c1 ln x + c2 . Letting c1 = 1 and c2 = 0, we obtain a second linearly independent u x solution, y2 (x) = x2 ln x. 2. We are given that y1 (x) = x sin x. Let y2 (x) = (x sin x)u so that y2 = xu cos x + (x sin x)u + u sin x, and y2 = −xu sin x + 2((x cos x)u + (x sin x)u + 2u cos x + 2(sin x)u . Substituting these results into the original u 2 cos x equation and simplifying we obtain sin xu + 2 cos xu = 0 =⇒ =⇒ u = c1 csc2 x =⇒ u(x) = =− u sin x −c1 cot x + c2 . 
Letting c1 = −1 and c2 = 0, we obtain a second linearly independent solution, y2 (x) = x cos x. 3. We are given y1 (x) = ex . Let y2 (x) = uex so that y2 = ex u + ex u , and y2 = ex u + 2ex u + ex u . Substituting these results into the given diﬀerential equation and simplifying we obtain xu + u = 0 =⇒ u 1 c1 = − =⇒ u = =⇒ u(x) = c1 ln x + c2 . Letting c1 = 1 and c2 = 0, we obtain a second linearly u x x independent solution, y2 (x) = ex ln x. 543 4. We are given y1 (x) = sin x2 . Let y2 (x) = u sin x2 so that y2 = u sin x2 + 2xu cos x2 , and y2 = u sin x2 + 4xu cos x2 + 2u cos x2 − 4x2 u sin x2 . Substituting these results into the given diﬀerential equation 1 u 1 = − 4x cot x2 =⇒ u = c1 x csc2 (x2 ) =⇒ u(x) = − c1 cot x2 + c2 . Letting and simplifying we obtain u x 2 c1 = 1 and c2 = 0, we obtain a second linearly independent solution, y2 (x) = cos(x2 ). 5. We are given y1 (x) = x and −1 < x < 1. Let y2 (x) = ux so that y2 = u x + u and y2 = u x + 2u . Substituting these results into the original equation and simplifying we obtain u (x − x3 ) + u (2 − 4x2 ) = 4x2 − 2 u 2 1 1 u = =⇒ =− − + =⇒ ln |u | = − ln |x| − ln |1 + x| + ln |1 − x| + c1 =⇒ 0 =⇒ u x − x3 u x 1+x 1−x c1 1 x+1 1 u (x) = , −1 < x < 1 =⇒ u(x) = c2 ln − + c3 . Letting c2 = 1 and c3 = 0 we 2 )x2 (1 − x 2 1−x x x+1 1 − 1. obtain a second linearly independent solution, y2 (x) = x ln 2 1−x 6. We are given y1 (x) = x−1/2 sin x. Letting y2 (x) = ux−1/2 sin x, we have 1 y2 = ux−1/2 cos x + (u x−1/2 − ux−3/2 ) sin x 2 and 1 y2 = u x−1/2 cos x + u − x−3/2 cos x − x−1/2 sin x + u x−1/2 sin x + u 2 1 x−1/2 cos x − x−3/2 sin x 2 1 1 3 − u x−3/2 sin x − u x−3/2 cos x − x−5/2 sin x . 2 2 2 Substituting these results into the given diﬀerential equation and simplifying we obtain 4u x3/2 sin x + u = −2 cot x =⇒ ln |u | = −2 ln | sin x| + ln |c1 | =⇒ u (x) = c2 csc2 (x) =⇒ u = 8u x3/2 cos x = 0 =⇒ u −c2 cot(x) + c3 . Letting c2 = 1 and c3 = 0 we obtain a second linearly diﬀerential solution, y2 (x) = x−1/2 cos x. 7. 
(a) If y1 (x) = xλ , then the corresponding indicial equation is given by λ2 − 2mλ + m2 = 0 =⇒ λ ∈ {m, m} so one particular solution of the given diﬀerential equation is y1 (x) = xm . (b) Let y2 (x) = xm u so y2 = xm u + mxm−1 u and y2 = xm u + muxm−2 + m2 xm−2 u + 2mxm−1 u . u = Substituting these results into the original equation and simplifying yields xm+1 u + xm+2 u = 0 =⇒ u 1 c − =⇒ u (x) = =⇒ u(x) = c ln x + c1 . Letting c = 1 and c1 = 0 we obtain a second linearly independent x xm solution, y2 (x) = x ln x. 8. If y (x) = a0 + a1 x + a2 x2 , then y (x) = a1 + 2a2 x and y (x) = 2a2 Substituting these results into the original equation and simplifying yields −2a1 x + (8a2 − 2a0 ) = 0 =⇒ a1 = 0 and a0 = 4a2 . Hence, y (x) = c1 (4+x2 ) is a solution to the diﬀerential equation where c1 is an arbitrary constant. Let y1 (x) = 4+x2 . If y (x) = (4 + x2 )u, then y (x) = (4 + x2 )u + 2xu and y (x) = (4 + x2 )u + 4xu + 2u. Substituting these results into the original equation and simplifying yields (4 + x2 )u + 4xu = 0. Substituting v = u into dv 4x the last equation gives (4 + x2 )v + 4xv = 0. Separating the variables gives =− dx =⇒ ln |v | = v 4 + x2 544 c2 du x c2 2x . Hence, =⇒ u(x) = c2 tan−1 ( ) + 2 = + c3 . 2 )2 2 )2 (4 + x dx (4 + x 2 x +4 Thus, y (x) = (4 + x2 )u becomes −2 ln (4 + x2 ) + 13 =⇒ v = y (x) = c2 [(4 + x2 ) tan−1 (x/2) + 2x] + c3 (4 + x2 ), which is the general solution to the given diﬀerential equation. A second linearly independent solution is y2 (x) = (4 + x2 ) tan−1 (x/2) + 2x. 9. (a) If y1 (x) = eαx then y1 (x) = αeαx and y1 (x) = α2 eαx so from the original equation we have xy1 + (αx + β )y1 + αβy1 = x(α2 eαx ) − (αx + β )(αeαx ) + αeαx = α2 x − (αx + β )α + αβ = 0. Hence, y1 (x) = eαx is a solution to the diﬀerential equation. (b) If y (x) = eαx u, then y = (u + αu)eαx and y = [u + α(2u + αu)]eαx . 
Substituting these results into the original equation and simplifying yields x u'' + (αx − β)u' = 0 ⟹ u''/u' = −α + β x^{−1} ⟹ u' = c1 x^β e^{−αx} ⟹ u(x) = c1 ∫ x^β e^{−αx} dx + c2. Letting c1 = 1 and c2 = 0 we obtain y2(x) = e^{αx} ∫ x^β e^{−αx} dx as a second linearly independent solution.

(c) If α = 1 and β is a non-negative integer, then y2(x) = e^x ∫ x^β e^{−x} dx. Use repeated integration by parts:

y2(x) = e^x(−x^β e^{−x} + β ∫ x^{β−1} e^{−x} dx)
= −β! [x^β/β! − (e^x/(β − 1)!) ∫ x^{β−1} e^{−x} dx]
= −β! [x^β/β! − (e^x/(β − 1)!)(−x^{β−1} e^{−x} + (β − 1) ∫ x^{β−2} e^{−x} dx)]
= −β! [x^β/β! + x^{β−1}/(β − 1)! − (e^x/(β − 2)!) ∫ x^{β−2} e^{−x} dx]
= ···
= −β! [x^β/β! + x^{β−1}/(β − 1)! + ··· + x^2/2! − (e^x/1!) ∫ x e^{−x} dx]
= −β! [x^β/β! + x^{β−1}/(β − 1)! + ··· + x^2/2! + x + 1].

10. We are given that y1(x) = e^{3x}. If y2(x) = u e^{3x}, then y2' = (u' + 3u)e^{3x} and y2'' = (u'' + 6u' + 9u)e^{3x}. Substituting these results into the original equation and simplifying we obtain u'' = 15√x, which implies that u' = 10x^{3/2} + c1 and u = 4x^{5/2} + c1 x + c2. Thus, the general solution is y(x) = e^{3x}(4x^{5/2} + c1 x + c2).

11. We are given that y1(x) = e^{2x}. If y2(x) = u e^{2x}, then y2' = (u' + 2u)e^{2x} and y2'' = (u'' + 4u' + 4u)e^{2x}. Substituting these results into the original equation and simplifying yields u'' = 4 ln x, which implies that u' = 4x ln x − 4x + c2 and u = c1 + c2 x + x^2(2 ln x − 3). Thus the general solution is y(x) = e^{2x}[c1 + c2 x + x^2(2 ln x − 3)].

12. We are given that y1(x) = √x. If y2(x) = u√x, then we compute that y2' = u'√x + (1/2)u x^{−1/2} and y2'' = u''√x + u' x^{−1/2} − (1/4)u x^{−3/2}. Substituting these results into the original equation and simplifying, we obtain 4x^2 u'' + 4x u' = ln x ⟹ x u'' + u' = (ln x)/(4x) ⟹ d(xu')/dx = (ln x)/(4x) ⟹ xu' = (1/8)(ln x)^2 + c1 ⟹ u(x) = (1/24)(ln x)^3 + c1 ln x + c2. Consequently, the general solution is

y(x) = √x [(1/24)(ln x)^3 + c1 ln x + c2].

13. We are given that y1(x) = sin x, where 0 < x < π. If y2(x) = u sin x, then y2' = u' sin x + u cos x and y2'' = u'' sin x + 2u' cos x − u sin x.
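The repeated integration by parts in Problem 9(c) amounts to the antiderivative ∫ x^β e^{−x} dx = −β! e^{−x}(x^β/β! + ··· + x + 1) + C. A quick numerical check of this closed form (added here for verification, not part of the printed solution) differentiates the candidate antiderivative and compares it with the integrand:

```python
import math

def antideriv(beta, x):
    # candidate antiderivative of x^beta e^{-x} from Problem 9(c)
    s = sum(x ** k / math.factorial(k) for k in range(beta + 1))
    return -math.factorial(beta) * math.exp(-x) * s

def dF(beta, x, h=1e-6):
    # derivative of the candidate antiderivative via central differences
    return (antideriv(beta, x + h) - antideriv(beta, x - h)) / (2 * h)

for beta in (0, 1, 2, 3, 5):
    for x in (0.5, 1.0, 2.0):
        assert abs(dF(beta, x) - x ** beta * math.exp(-x)) < 1e-6
```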
Substituting these results into the original equation and simplifying yields u'' sin x + 2u' cos x = csc x ⟹ u'' sin^2 x + 2u' sin x cos x = 1 ⟹ u' = x/sin^2 x + c3/sin^2 x ⟹ u(x) = −x cot x + ln(sin x) − c3 cot x + c4. Choosing c3 = c4 = 0, we obtain u(x) = −x cot x + ln(sin x). Hence, the general solution is y(x) = c1 sin x + c2 sin x[ln(sin x) − x cot x].

14. We are given that y1(x) = e^{2x}, where x > 0. If y2(x) = u e^{2x}, then we compute y2' = (u' + 2u)e^{2x} and y2'' = (u'' + 4u' + 4u)e^{2x}. Substituting these results into the original equation and simplifying we obtain x u'' + (2x − 1)u' = 8x^2, or equivalently, u'' + ((2x − 1)/x)u' = 8x. An integrating factor for the last equation is e^{2x}/x, so that d/dx(e^{2x} u'/x) = 8e^{2x} ⟹ u' = 4x + c3 x e^{−2x}. Hence, u(x) = 2x^2 + c3(−(1/2)x e^{−2x} − (1/4)e^{−2x}) + c2. Thus, the general solution is y(x) = 2x^2 e^{2x} + c1(2x + 1) + c2 e^{2x}.

15. We are given that y1(x) = x^2, where x > 0. If y2(x) = u x^2, then y2' = u' x^2 + 2xu and y2'' = u'' x^2 + 4u' x + 2u. Substituting these results into the original differential equation and simplifying we obtain u'' x + u' = 8x ⟹ xu' = 4x^2 + c2 ⟹ u(x) = 2x^2 + c2 ln x + c1. Hence, the general solution is y(x) = x^2(c1 + c2 ln x + 2x^2).

16. Since y1(x) is a solution of the associated homogeneous differential equation, we have y1'' + p y1' + q y1 = 0. Let y2(x) = u(x) y1(x), so that y2' = u' y1 + u y1' and y2'' = u'' y1 + 2u' y1' + u y1''. Substituting these results into the original equation and simplifying we obtain the following equalities:

y2'' + p y2' + q y2 = u'' y1 + 2u' y1' + u y1'' + p(u' y1 + u y1') + q u y1
= u(y1'' + p y1' + q y1) + u'' y1 + 2u' y1' + p u' y1
= y1 u'' + (2y1' + p y1)u'
= y1 [u'' + (2 y1'/y1 + p)u']
= y1 [v' + (2 y1'/y1 + p)v] = r,

where v = u'. Therefore, y2(x) = u(x) y1(x) is a solution to the given differential equation provided v = u' is a solution of the equation v' + (2 y1'/y1 + p)v = r/y1.
The last differential equation has integrating factor

I(x) = e^{∫(2 y1'/y1 + p)dt} = e^{2∫(y1'/y1)dt} e^{∫p dt} = y1^2 e^{∫p dt},

so

d/dt [I(t)v] = I(t)(r/y1) ⟹ I(x)v = ∫^x I(t)(r/y1)dt + c2 ⟹ v(x) = I^{−1}(x) ∫^x I(t)(r/y1)dt + c2 I^{−1}(x).

Since du/dx = v, we have

u(x) = ∫^x [I^{−1}(s) ∫^s I(t)(r/y1)dt] ds + c1 + c2 ∫^x I^{−1}(t)dt.

Hence,

y(x) = y1(x){ ∫^x [I^{−1}(s) ∫^s I(t)(r/y1)dt] ds + c1 + c2 ∫^x I^{−1}(t)dt }.

Linearly independent solutions: y1(x) and y1(x) ∫^x I^{−1}(t)dt. Particular solution: y1(x) ∫^x [I^{−1}(s) ∫^s I(t)(r/y1)dt] ds.

Solutions to Section 6.10

Problems:

1. Ly = (D^2 + 3)(e^{x^3}) = D^2(e^{x^3}) + 3e^{x^3} = D(3x^2 e^{x^3}) + 3e^{x^3} = e^{x^3}(9x^4 + 6x) + 3e^{x^3} = 3e^{x^3}(3x^4 + 2x + 1).

2. Ly = 5 · 1/(1 + x^2) = 5/(1 + x^2).

3. Ly = ((1/x)D^2 + xD − 2)(4 sin x) = (1/x)D^2(4 sin x) + xD(4 sin x) − 2(4 sin x) = −(4/x) sin x + 4x cos x − 8 sin x.

4. Ly = [x^2 D^3 − (sin x)D](e^{2x} + cos x) = x^2 D^3(e^{2x} + cos x) − (sin x)D(e^{2x} + cos x) = x^2(8e^{2x} + sin x) − sin x(2e^{2x} − sin x).

5. Ly = [(x^2 + 1)D^3 − (cos x)D + 5x^2](ln x + 8x^5) = (x^2 + 1)D^3(ln x + 8x^5) − (cos x)D(ln x + 8x^5) + 5x^2(ln x + 8x^5) = (x^2 + 1)(2/x^3 + 480x^2) − (cos x)(1/x + 40x^4) + 5x^2(ln x + 8x^5) = 2/x + 2/x^3 + 480x^4 + 480x^2 − (cos x)/x − 40x^4 cos x + 5x^2 ln x + 40x^7.

6. Ly = 4x^2 D[sin^2(x^2 + 1)] = 16x^3 sin(x^2 + 1) cos(x^2 + 1).

7. P(r) = r^3 + 3r^2 − 4 = (r − 1)(r + 2)^2 = 0 ⟹ r = 1, r = −2 (multiplicity 2). General solution: y(x) = c1 e^x + c2 e^{−2x} + c3 x e^{−2x}.

8. P(r) = r^3 + 11r^2 + 36r + 26 = (r + 1)(r^2 + 10r + 26) = 0 ⟹ r = −1, r = −5 ± i. General solution: y(x) = c1 e^{−x} + e^{−5x}(c2 cos x + c3 sin x).

9. P(r) = r^4 + 13r^2 + 36 = (r^2 + 4)(r^2 + 9) ⟹ r = ±2i, r = ±3i. General solution: y(x) = c1 cos 2x + c2 sin 2x + c3 cos 3x + c4 sin 3x.

10. P(r) = r^3 + 10r^2 + 25r = r(r + 5)^2 ⟹ r = 0, r = −5 (multiplicity 2). General solution: y(x) = c1 + c2 e^{−5x} + c3 x e^{−5x}.

11. P(r) = (r + 3)^3(r^2 − 4r + 13) ⟹ r = −3 (multiplicity 3), r = 2 ± 3i.
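The root lists in Problems 7, 8, and 10 depend on the stated factorizations of P(r); these can be spot-checked by comparing the expanded and factored forms at sample points (an added sanity check, not part of the printed solution):

```python
# Problems 7, 8, 10: compare expanded and factored characteristic polynomials
def p7(r):  return r**3 + 3*r**2 - 4
def f7(r):  return (r - 1) * (r + 2)**2

def p8(r):  return r**3 + 11*r**2 + 36*r + 26
def f8(r):  return (r + 1) * (r**2 + 10*r + 26)

def p10(r): return r**3 + 10*r**2 + 25*r
def f10(r): return r * (r + 5)**2

for r in (-3.0, -1.5, 0.0, 2.0):
    assert abs(p7(r) - f7(r)) < 1e-9
    assert abs(p8(r) - f8(r)) < 1e-9
    assert abs(p10(r) - f10(r)) < 1e-9

# Problem 8: the quadratic factor r^2 + 10r + 26 has roots r = -5 ± i
assert abs(complex(-5, 1)**2 + 10 * complex(-5, 1) + 26) < 1e-9
```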
General solution: y(x) = c1 e^{−3x} + c2 x e^{−3x} + c3 x^2 e^{−3x} + e^{2x}(c4 cos 3x + c5 sin 3x).

12. P(r) = (r^2 − 2r + 2)^3 ⟹ r = 1 ± i (multiplicity 3). General solution: y(x) = e^x[c1 cos x + c2 sin x + x(c3 cos x + c4 sin x) + x^2(c5 cos x + c6 sin x)].

13. P(r) = (r^2 + 4r + 4)(r − 3) = (r + 2)^2(r − 3) ⟹ r = −2 (multiplicity 2), r = 3. General solution: y(x) = c1 e^{−2x} + c2 x e^{−2x} + c3 e^{3x}.

14. A(D) = D^2(D + 1).

15. A(D) = D^2 − 6D + 10.

16. A(D) = (D^2 + 16)^6.

17. A(D) = (D^2 + 1)^2(D + 2).

18. In operator form the given differential equation is (D^2 + 6D + 9)y = 4e^{−3x}, that is,

(D + 3)^2 y = 4e^{−3x}. (0.0.22)

Therefore the complementary function is yc(x) = c1 e^{−3x} + c2 x e^{−3x}. The annihilator of F(x) = 4e^{−3x} is A(D) = D + 3. Operating on (0.0.22) with D + 3 yields (D + 3)^3 y = 0, which has general solution y(x) = yc(x) + A0 x^2 e^{−3x}. Consequently, an appropriate trial solution for (0.0.22) is yp(x) = A0 x^2 e^{−3x}.

19. In operator form the given differential equation is (D^2 + 6D + 9)y = 4e^{−2x}, that is,

(D + 3)^2 y = 4e^{−2x}. (0.0.23)

Therefore the complementary function is yc(x) = c1 e^{−3x} + c2 x e^{−3x}. The annihilator of F(x) = 4e^{−2x} is A(D) = D + 2. Operating on (0.0.23) with D + 2 yields (D + 2)(D + 3)^2 y = 0, which has general solution y(x) = yc(x) + A0 e^{−2x}. Consequently, an appropriate trial solution for (0.0.23) is yp(x) = A0 e^{−2x}.

20. In operator form the given differential equation is (D^3 − 6D^2 + 25D)y = x^2, that is,

D(D^2 − 6D + 25)y = x^2. (0.0.24)

Therefore the complementary function is yc(x) = e^{3x}(c1 cos 4x + c2 sin 4x) + c3. The annihilator of F(x) = x^2 is A(D) = D^3. Operating on (0.0.24) with D^3 yields D^4(D^2 − 6D + 25)y = 0, which has general solution y(x) = yc(x) + A0 x + A1 x^2 + A2 x^3. Consequently, an appropriate trial solution for (0.0.24) is yp(x) = A0 x + A1 x^2 + A2 x^3.

21. In operator form the given differential equation is (D^3 − 6D^2 + 25D)y = sin 4x, that is,

D(D^2 − 6D + 25)y = sin 4x. (0.0.25)
Therefore the complementary function is yc(x) = e^{3x}(c1 cos 4x + c2 sin 4x) + c3. The annihilator of F(x) = sin 4x is A(D) = D^2 + 16. Operating on (0.0.25) with D^2 + 16 yields (D^2 + 16)D(D^2 − 6D + 25)y = 0, which has general solution y(x) = yc(x) + A0 cos 4x + A1 sin 4x. Consequently, an appropriate trial solution for (0.0.25) is yp(x) = A0 cos 4x + A1 sin 4x.

22. In operator form the given differential equation is (D^3 + 9D^2 + 24D + 16)y = 8e^{−x} + 1, that is,

(D + 1)(D + 4)^2 y = 8e^{−x} + 1. (0.0.26)

Therefore the complementary function is yc(x) = c1 e^{−4x} + c2 x e^{−4x} + c3 e^{−x}. The annihilator of F(x) = 8e^{−x} + 1 is A(D) = D(D + 1). Operating on (0.0.26) with D(D + 1) yields D(D + 1)^2(D + 4)^2 y = 0, which has general solution y(x) = yc(x) + A0 + A1 x e^{−x}. Consequently, an appropriate trial solution for (0.0.26) is yp(x) = A0 + A1 x e^{−x}.

23. In operator form the given differential equation is (D^6 + 3D^4 + 3D^2 + 1)y = 2 sin x, that is,

(D^2 + 1)^3 y = 2 sin x. (0.0.27)

Therefore the complementary function is yc(x) = c1 cos x + c2 sin x + x(c3 cos x + c4 sin x) + x^2(c5 cos x + c6 sin x). The annihilator of F(x) = 2 sin x is A(D) = D^2 + 1. Operating on (0.0.27) with D^2 + 1 yields (D^2 + 1)^4 y = 0, which has general solution y(x) = yc(x) + x^3(A0 cos x + A1 sin x). Consequently, an appropriate trial solution for (0.0.27) is yp(x) = x^3(A0 cos x + A1 sin x).

24. (a) From Problem 19 the complementary function for the given differential equation is yc(x) = c1 e^{−3x} + c2 x e^{−3x}, and an appropriate trial solution is yp(x) = A0 e^{−2x}. Inserting this expression for yp(x) into the given differential equation yields e^{−2x}(4A0 − 12A0 + 9A0) = 4e^{−2x}, so that yp(x) = 4e^{−2x}. Consequently, the general solution to the given differential equation is y(x) = c1 e^{−3x} + c2 x e^{−3x} + 4e^{−2x}.

(b) Choosing y1(x) = e^{−3x}, y2(x) = x e^{−3x}, we have W[y1, y2](x) = e^{−6x}.
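Problem 24(a)'s particular solution can be verified against the underlying equation y'' + 6y' + 9y = 4e^{−2x} (the operator form (0.0.23) written out). A finite-difference residual check, added here as a sketch:

```python
import math

def d1(f, x, h=1e-5):
    # first derivative via central differences
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    # second derivative via central differences
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def yp(x):
    return 4 * math.exp(-2 * x)   # particular solution from Problem 24(a)

# residual of y'' + 6y' + 9y - 4e^{-2x}, which should be ~0
for x in (0.0, 0.7, 1.5):
    res = d2(yp, x) + 6 * d1(yp, x) + 9 * yp(x) - 4 * math.exp(-2 * x)
    assert abs(res) < 1e-4
```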
Hence a particular solution to the given differential equation is

yp(x) = −e^{−3x} ∫ (x e^{−3x} · 4e^{−2x})/e^{−6x} dx + x e^{−3x} ∫ (e^{−3x} · 4e^{−2x})/e^{−6x} dx = 4e^{−2x}.

Therefore the differential equation has general solution y(x) = c1 e^{−3x} + c2 x e^{−3x} + 4e^{−2x}.

25. (a) From Problem 20 the complementary function for the given differential equation is yc(x) = e^{3x}(c1 cos 4x + c2 sin 4x) + c3, and an appropriate trial solution is yp(x) = A0 x + A1 x^2 + A2 x^3. Inserting this expression for yp(x) into the given differential equation yields 6A2 − 6(2A1 + 6A2 x) + 25(A0 + 2A1 x + 3A2 x^2) = x^2, so that A0, A1, A2 must satisfy

25A0 − 12A1 + 6A2 = 0, 50A1 − 36A2 = 0, 75A2 = 1.

Hence, A0 = 22/15625, A1 = 6/625, A2 = 1/75, so that

yp(x) = (22/15625)x + (6/625)x^2 + (1/75)x^3.

Consequently, the general solution to the given differential equation is

y(x) = e^{3x}(c1 cos 4x + c2 sin 4x) + c3 + (22/15625)x + (6/625)x^2 + (1/75)x^3.

(b) Choosing y1(x) = 1, y2(x) = e^{3x} cos 4x, y3(x) = e^{3x} sin 4x, we have

W[y1, y2, y3](x) = det[ 1, e^{3x} cos 4x, e^{3x} sin 4x; 0, e^{3x}(3 cos 4x − 4 sin 4x), e^{3x}(3 sin 4x + 4 cos 4x); 0, e^{3x}(−7 cos 4x − 24 sin 4x), e^{3x}(−7 sin 4x + 24 cos 4x) ] = 100e^{6x}.

Then a particular solution to the given differential equation is yp(x) = u1 + u2 e^{3x} cos 4x + u3 e^{3x} sin 4x, where

u1' + e^{3x} cos 4x · u2' + e^{3x} sin 4x · u3' = 0,
e^{3x}(3 cos 4x − 4 sin 4x)u2' + e^{3x}(3 sin 4x + 4 cos 4x)u3' = 0,
e^{3x}(−7 cos 4x − 24 sin 4x)u2' + e^{3x}(−7 sin 4x + 24 cos 4x)u3' = x^2.

Solving this system yields:

u1' = (1/25)x^2, u2' = −(1/100)e^{−3x} x^2(3 sin 4x + 4 cos 4x), u3' = (1/100)e^{−3x} x^2(3 cos 4x − 4 sin 4x).

Hence,

yp(x) = (1/75)x^3 − (1/100)e^{3x} cos 4x ∫ e^{−3x} x^2(3 sin 4x + 4 cos 4x)dx + (1/100)e^{3x} sin 4x ∫ e^{−3x} x^2(3 cos 4x − 4 sin 4x)dx = (1/75)x^3 + (6/625)x^2 + (22/15625)x − 168/390625,

and the general solution to the given differential equation is

y(x) = e^{3x}(c1 cos 4x + c2 sin 4x) + c3 + (1/75)x^3 + (6/625)x^2 + (22/15625)x.

26.
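The coefficients found in Problem 25(a) can be confirmed with exact rational arithmetic (an added check, not part of the printed solution):

```python
from fractions import Fraction

# Problem 25(a): the stated coefficients should solve the 3x3 system exactly
A0 = Fraction(22, 15625)
A1 = Fraction(6, 625)
A2 = Fraction(1, 75)
assert 25 * A0 - 12 * A1 + 6 * A2 == 0
assert 50 * A1 - 36 * A2 == 0
assert 75 * A2 == 1
```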
(a) From Problem 21 the complementary function for the given differential equation is yc(x) = e^{3x}(c1 cos 4x + c2 sin 4x) + c3, and an appropriate trial solution is yp(x) = A0 cos 4x + B0 sin 4x. Inserting this expression for yp(x) into the given differential equation yields

64A0 sin 4x − 64B0 cos 4x − 6(−16A0 cos 4x − 16B0 sin 4x) + 25(−4A0 sin 4x + 4B0 cos 4x) = sin 4x,

that is,

(−36A0 + 96B0) sin 4x + (96A0 + 36B0) cos 4x = sin 4x,

so that A0 and B0 must satisfy −36A0 + 96B0 = 1, 96A0 + 36B0 = 0. Hence, A0 = −1/292, B0 = 2/219, so that

yp(x) = −(1/292) cos 4x + (2/219) sin 4x.

Consequently, the general solution to the given differential equation is

y(x) = e^{3x}(c1 cos 4x + c2 sin 4x) + c3 − (1/292) cos 4x + (2/219) sin 4x.

(b) Choosing y1(x) = e^{3x} cos 4x, y2(x) = e^{3x} sin 4x, y3(x) = 1, a particular solution to the given differential equation is

yp(x) = e^{3x} cos 4x · u1 + e^{3x} sin 4x · u2 + u3 (0.0.28)

where

e^{3x} cos 4x · u1' + e^{3x} sin 4x · u2' + u3' = 0,
e^{3x}(3 cos 4x − 4 sin 4x)u1' + e^{3x}(3 sin 4x + 4 cos 4x)u2' = 0,
e^{3x}(−7 cos 4x − 24 sin 4x)u1' + e^{3x}(−7 sin 4x + 24 cos 4x)u2' = sin 4x.

Solving this system using Cramer's rule (each unknown is a quotient of 3 × 3 determinants whose denominator is the Wronskian, 100e^{6x}) yields:

u1' = −(1/100)e^{−3x}(3 sin^2 4x + 4 cos 4x sin 4x) = −(1/100)e^{−3x}[(3/2)(1 − cos 8x) + 2 sin 8x],

u2' = (1/100)e^{−3x}(3 sin 4x cos 4x − 4 sin^2 4x) = (1/100)e^{−3x}[(3/2) sin 8x + 2(cos 8x − 1)],

u3' = (1/25) sin 4x.
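Problem 26(a)'s values A0 = −1/292 and B0 = 2/219 can be confirmed with exact rational arithmetic (an added check, not part of the printed solution):

```python
from fractions import Fraction

# Problem 26(a): the stated coefficients should solve the 2x2 system exactly
A0 = Fraction(-1, 292)
B0 = Fraction(2, 219)
assert -36 * A0 + 96 * B0 == 1
assert 96 * A0 + 36 * B0 == 0
```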
Integrating these expressions and substituting into (0.0.28) yields

yp(x) = (2/219) sin 4x − (1/292) cos 4x,

so that the general solution to the given differential equation is

y(x) = e^{3x}(c1 cos 4x + c2 sin 4x) + c3 + (2/219) sin 4x − (1/292) cos 4x.

27. (a) In operator form the given differential equation is (D^2 − 4)y = 5e^x, that is,

(D − 2)(D + 2)y = 5e^x. (0.0.29)

Therefore the complementary function is yc(x) = c1 e^{2x} + c2 e^{−2x}. The annihilator of F(x) = 5e^x is A(D) = D − 1. Operating on (0.0.29) with D − 1 yields (D − 1)(D − 2)(D + 2)y = 0, which has general solution y(x) = yc(x) + A0 e^x. Consequently, an appropriate trial solution for (0.0.29) is yp(x) = A0 e^x. Inserting this expression for yp(x) into the given differential equation yields A0 e^x(1 − 4) = 5e^x, so that A0 = −5/3. Hence,

yp(x) = −(5/3)e^x,

and the general solution to the given differential equation is

y(x) = c1 e^{2x} + c2 e^{−2x} − (5/3)e^x.

(b) Choosing y1(x) = e^{2x}, y2(x) = e^{−2x}, we have W[y1, y2](x) = −4. Hence a particular solution to the given differential equation is

yp(x) = −e^{2x} ∫ (e^{−2x} · 5e^x)/(−4) dx + e^{−2x} ∫ (e^{2x} · 5e^x)/(−4) dx = −(5/3)e^x.

Therefore the differential equation has general solution

y(x) = c1 e^{2x} + c2 e^{−2x} − (5/3)e^x.

28. (a) In operator form the given differential equation is (D^2 + 2D + 1)y = 2x e^{−x}, that is,

(D + 1)^2 y = 2x e^{−x}. (0.0.30)

Therefore the complementary function is yc(x) = c1 e^{−x} + c2 x e^{−x}. The annihilator of F(x) = 2x e^{−x} is A(D) = (D + 1)^2. Operating on (0.0.30) with (D + 1)^2 yields (D + 1)^4 y = 0, which has general solution y(x) = yc(x) + A0 x^2 e^{−x} + A1 x^3 e^{−x}. Consequently, an appropriate trial solution for (0.0.30) is yp(x) = e^{−x}(A0 x^2 + A1 x^3). Differentiating this trial solution with respect to x yields

yp'(x) = e^{−x}(−A0 x^2 − A1 x^3 + 2A0 x + 3A1 x^2),
yp''(x) = e^{−x}(A0 x^2 + A1 x^3 − 4A0 x − 6A1 x^2 + 2A0 + 6A1 x).
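Problem 27's particular solution yp(x) = −(5/3)e^x should satisfy y'' − 4y = 5e^x. A finite-difference residual check (an added sketch, not part of the printed solution):

```python
import math

def d2(f, x, h=1e-4):
    # second derivative via central differences
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def yp(x):
    return -5.0 / 3.0 * math.exp(x)   # Problem 27's particular solution

# residual of y'' - 4y - 5e^x, which should vanish
for x in (0.0, 0.8, 1.6):
    assert abs(d2(yp, x) - 4 * yp(x) - 5 * math.exp(x)) < 1e-4
```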
Inserting these results into the given differential equation yields

e^{−x}(A0 x^2 + A1 x^3 − 4A0 x − 6A1 x^2 + 2A0 + 6A1 x) + 2e^{−x}(−A0 x^2 − A1 x^3 + 2A0 x + 3A1 x^2) + e^{−x}(A0 x^2 + A1 x^3) = 2x e^{−x},

that is, 2A0 + 6A1 x = 2x. Consequently, A0 = 0, A1 = 1/3. Hence,

yp(x) = (1/3)x^3 e^{−x},

and the general solution to the given differential equation is

y(x) = c1 e^{−x} + c2 x e^{−x} + (1/3)x^3 e^{−x}.

(b) Choosing y1(x) = e^{−x}, y2(x) = x e^{−x}, we have W[y1, y2](x) = e^{−2x}. Hence a particular solution to the given differential equation is

yp(x) = −e^{−x} ∫ (x e^{−x} · 2x e^{−x})/e^{−2x} dx + x e^{−x} ∫ (e^{−x} · 2x e^{−x})/e^{−2x} dx = (1/3)x^3 e^{−x}.

Therefore the given differential equation has general solution

y(x) = c1 e^{−x} + c2 x e^{−x} + (1/3)x^3 e^{−x}.

29. (a) In operator form the given differential equation is (D^2 − 1)y = 4e^x, that is,

(D − 1)(D + 1)y = 4e^x. (0.0.31)

Therefore the complementary function is yc(x) = c1 e^x + c2 e^{−x}. The annihilator of F(x) = 4e^x is A(D) = D − 1. Operating on (0.0.31) with D − 1 yields (D − 1)^2(D + 1)y = 0, which has general solution y(x) = yc(x) + A0 x e^x. Consequently, an appropriate trial solution for (0.0.31) is yp(x) = A0 x e^x. Differentiating this trial solution with respect to x yields

yp'(x) = A0 e^x(x + 1), yp''(x) = A0 e^x(x + 2).

Inserting these expressions into the given differential equation yields A0 e^x(x + 2) − A0 x e^x = 4e^x, so that A0 = 2. Hence, yp(x) = 2x e^x, and the general solution to the given differential equation is

y(x) = c1 e^x + c2 e^{−x} + 2x e^x.

(b) Choosing y1(x) = e^x, y2(x) = e^{−x}, we have W[y1, y2](x) = −2. Hence a particular solution to the given differential equation is

yp(x) = −e^x ∫ (e^{−x} · 4e^x)/(−2) dx + e^{−x} ∫ (e^x · 4e^x)/(−2) dx = 2x e^x − e^x.

The last term in this expression for yp(x) can be omitted since it is part of the complementary function. Consequently, the given differential equation has general solution

y(x) = c1 e^x + c2 e^{−x} + 2x e^x.

30.
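The particular solutions of Problems 28 and 29 can be verified the same way against their equations y'' + 2y' + y = 2xe^{−x} and y'' − y = 4e^x (the operator forms written out). An added numerical sketch, not part of the printed solution:

```python
import math

def d1(f, x, h=1e-5):
    # first derivative via central differences
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    # second derivative via central differences
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def yp28(x):
    return x ** 3 * math.exp(-x) / 3.0   # Problem 28's particular solution

def yp29(x):
    return 2 * x * math.exp(x)           # Problem 29's particular solution

for x in (0.5, 1.0, 2.0):
    # y'' + 2y' + y = 2x e^{-x}
    assert abs(d2(yp28, x) + 2 * d1(yp28, x) + yp28(x) - 2 * x * math.exp(-x)) < 1e-4
    # y'' - y = 4 e^x
    assert abs(d2(yp29, x) - yp29(x) - 4 * math.exp(x)) < 1e-3
```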
The given differential equation does not have constant coefficients, and therefore the annihilator method cannot be applied.

31. The nonhomogeneous term F(x) = ln x cannot be annihilated, and therefore the annihilator method cannot be applied.

32. In operator form the given differential equation is (D^2 + 2D − 3)y = 5e^x, that is,

(D + 3)(D − 1)y = 5e^x. (0.0.32)

Therefore the complementary function is yc(x) = c1 e^x + c2 e^{−3x}. The annihilator of F(x) = 5e^x is A(D) = D − 1. Operating on (0.0.32) with D − 1 yields (D − 1)^2(D + 3)y = 0, which has general solution y(x) = yc(x) + A0 x e^x. Consequently, an appropriate trial solution for (0.0.32) is yp(x) = A0 x e^x.

33. The nonhomogeneous term F(x) = tan x cannot be annihilated, and therefore the annihilator method cannot be applied.

34. In operator form the given differential equation is

(D^2 + 1)y = 4 cos 2x + 3e^x. (0.0.33)

Therefore the complementary function is yc(x) = c1 cos x + c2 sin x. The annihilator of F(x) = 4 cos 2x + 3e^x is A(D) = (D^2 + 4)(D − 1). Operating on (0.0.33) with (D^2 + 4)(D − 1) yields (D^2 + 4)(D − 1)(D^2 + 1)y = 0, which has general solution y(x) = yc(x) + A0 cos 2x + A1 sin 2x + A2 e^x. Consequently, an appropriate trial solution for (0.0.33) is yp(x) = A0 cos 2x + A1 sin 2x + A2 e^x.

35. In operator form the given differential equation is (D^2 − 8D + 16)y = 7e^{4x}, that is,

(D − 4)^2 y = 7e^{4x}. (0.0.34)

Therefore the complementary function is yc(x) = c1 e^{4x} + c2 x e^{4x}. The annihilator of F(x) = 7e^{4x} is A(D) = D − 4. Operating on (0.0.34) with D − 4 yields (D − 4)^3 y = 0, which has general solution y(x) = yc(x) + A0 x^2 e^{4x}. Consequently, an appropriate trial solution for (0.0.34) is yp(x) = A0 x^2 e^{4x}.

36. The differential equation does not have constant coefficients and therefore the annihilator method cannot be applied.

37. In operator form the given differential equation is

(D^2 − 2D + 5)y = 7e^x cos x + sin x. (0.0.35)

Therefore the complementary function is yc(x) = e^x(c1 cos 2x + c2 sin 2x).
The annihilator of F(x) = 7e^x cos x + sin x is A(D) = (D^2 − 2D + 2)(D^2 + 1). Operating on (0.0.35) with (D^2 − 2D + 2)(D^2 + 1) yields (D^2 − 2D + 2)(D^2 + 1)(D^2 − 2D + 5)y = 0, which has general solution y(x) = yc(x) + e^x(A0 cos x + A1 sin x) + A2 cos x + A3 sin x. Consequently, an appropriate trial solution for (0.0.35) is yp(x) = e^x(A0 cos x + A1 sin x) + A2 cos x + A3 sin x.

38. In operator form the given differential equation is (D^2 + 4)y = 7 cos^2 x, that is, using the trigonometric identity cos^2 x = (1/2)(cos 2x + 1),

(D^2 + 4)y = (7/2)(cos 2x + 1). (0.0.36)

Therefore the complementary function is yc(x) = c1 cos 2x + c2 sin 2x. The annihilator of F(x) = (7/2)(cos 2x + 1) is A(D) = D(D^2 + 4). Operating on (0.0.36) with D(D^2 + 4) yields D(D^2 + 4)^2 y = 0, which has general solution y(x) = yc(x) + A0 + x(A1 cos 2x + A2 sin 2x). Consequently, an appropriate trial solution for (0.0.36) is yp(x) = A0 + x(A1 cos 2x + A2 sin 2x).

39. In operator form the given differential equation is

(D^2 − 2aD + a^2 + b^2)y = e^{at}(4t + cos bt). (0.0.37)

Therefore the complementary function is yc(t) = e^{at}(c1 cos bt + c2 sin bt). The annihilator of F(t) = e^{at}(4t + cos bt) is A(D) = (D − a)^2(D^2 − 2aD + a^2 + b^2). Operating on (0.0.37) with (D − a)^2(D^2 − 2aD + a^2 + b^2) yields (D − a)^2(D^2 − 2aD + a^2 + b^2)^2 y = 0, which has general solution y(t) = yc(t) + e^{at}(A0 + A1 t) + t e^{at}(A2 cos bt + A3 sin bt). Consequently, an appropriate trial solution for (0.0.37) is yp(t) = e^{at}(A0 + A1 t) + t e^{at}(A2 cos bt + A3 sin bt).

40. In operator form the given differential equation is

(D^2 + 4)y = 7e^x. (0.0.38)

Therefore the complementary function is yc(x) = c1 cos 2x + c2 sin 2x. The annihilator of F(x) = 7e^x is A(D) = D − 1. Operating on (0.0.38) with D − 1 yields (D − 1)(D^2 + 4)y = 0, which has general solution y(x) = yc(x) + A0 e^x. Consequently, an appropriate trial solution for (0.0.38) is yp(x) = A0 e^x.
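An annihilator must map the forcing term to zero. A finite-difference spot check (added, not part of the printed solution) of Problem 40's F(x) = 7e^x under D − 1, and of the cos 2x piece of Problem 38's forcing term under D^2 + 4:

```python
import math

def d1(f, x, h=1e-5):
    # first derivative via central differences
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    # second derivative via central differences
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def F(x):
    return 7 * math.exp(x)      # forcing term in Problem 40

def G(x):
    return math.cos(2 * x)      # cos 2x piece of Problem 38's forcing term

for x in (0.3, 1.1, 2.4):
    assert abs(d1(F, x) - F(x)) < 1e-4        # (D - 1)F = 0
    assert abs(d2(G, x) + 4 * G(x)) < 1e-4    # (D^2 + 4)G = 0
```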
Inserting this expression for yp into the given differential equation yields 5A0 e^x = 7e^x, so that A0 = 7/5. Hence,

yp(x) = (7/5)e^x,

and the general solution to the given differential equation is

y(x) = c1 cos 2x + c2 sin 2x + (7/5)e^x.

41. In operator form the given differential equation is (D^2 + 2D − 3)y = 2x e^{−3x}, that is,

(D + 3)(D − 1)y = 2x e^{−3x}. (0.0.39)

Therefore the complementary function is yc(x) = c1 e^{−3x} + c2 e^x. The annihilator of F(x) = 2x e^{−3x} is A(D) = (D + 3)^2. Operating on (0.0.39) with (D + 3)^2 yields (D + 3)^3(D − 1)y = 0, which has general solution y(x) = yc(x) + e^{−3x}(A0 x + A1 x^2). Consequently, an appropriate trial solution for (0.0.39) is yp(x) = e^{−3x}(A0 x + A1 x^2). Differentiating this trial solution with respect to x yields

yp'(x) = e^{−3x}(−3A0 x − 3A1 x^2 + A0 + 2A1 x),
yp''(x) = e^{−3x}(9A0 x + 9A1 x^2 − 3A0 − 6A1 x − 3A0 − 6A1 x + 2A1).

Inserting these expressions into the given differential equation yields

e^{−3x}(9A0 x + 9A1 x^2 − 6A0 − 12A1 x + 2A1) + 2e^{−3x}(−3A0 x − 3A1 x^2 + A0 + 2A1 x) − 3e^{−3x}(A0 x + A1 x^2) = 2x e^{−3x},

which simplifies to −4A0 + 2A1 − 8A1 x = 2x. Therefore A0 and A1 must satisfy −4A0 + 2A1 = 0 and −8A1 = 2, so that A0 = −1/8 and A1 = −1/4. Hence,

yp(x) = e^{−3x}(−(1/8)x − (1/4)x^2) = −(1/8)x e^{−3x}(2x + 1),
Inserting these expressions into the given diﬀerential equation yields 2A2 + 6A3 x + 4(A1 + 2A2 x + 3A3 x2 ) = 4x2 , that is, A2 + 2A1 + x(3A3 + 4A2 ) + 6A3 x2 = 2x2 . Therefore A1 , A2 , and A3 must satisfy A2 + 2A1 = 0, 3A3 + 4A2 = 0, 6A3 = 2, 1 1 so that A1 = 8 , A2 = − 1 , and A3 = 3 . Hence, 4 yp (x) = 1 1 1 1 x − x2 + x3 = x(3 − 6x + 8x2 ), 8 4 3 24 and the general solution to the given diﬀerential equation is y (x) = c1 + c2 e−4x + 1 x(3 − 6x + 8x2 ). 24 43. In operator form the given diﬀerential equation is (D2 + 4)y = 8 cos 2x. (0.0.41) 561 Therefore the complementary function is yc (x) = c1 cos 2x + c2 sin 2x. The annihilator of F (x) = 8 cos 2x is A(D) = D2 + 4. Operating on (0.0.41) with D2 + 4 yields (D2 + 4)2 y = 0 which has general solution y (x) = yc (x) + x(A0 cos 2x + A1 sin 2x). Consequently, an appropriate trial solution for (0.0.41) is yp (x) = x(A0 cos 2x + A1 sin 2x). Diﬀerentiating this trial solution with respect to x yields yp (x) = A0 cos 2x + A1 sin 2x + x(−2A0 sin 2x + A1 cos 2x), yp (x) = −4A0 sin 2x + 4A1 cos 2x + x(−4A0 cos 2x − A1 sin 2x). Inserting these expressions into the given diﬀerential equation yields −4A0 sin 2x + 4A1 cos 2x + x(−4A0 cos 2x − A1 sin 2x) + 4x(A0 cos 2x + A1 sin 2x) = 8 cos 2x, that is, −4A0 sin 2x + 4A1 cos 2x = 8 cos 2x. Consequently, A0 = 0, and A1 = 2. Therefore, yp (x) = 2x sin 2x, and the general solution to the given diﬀerential equation is y (x) = c1 cos 2x + c2 sin 2x + 2x sin 2x. 44. In operator form the given diﬀerential equation is (D2 − 8D + 16)y = 5e4x , that is (D − 4)2 y = 5e4x . Therefore the complementary function is yc (x) = c1 e4x + c2 xe4x . The annihilator of F (x) = 5e4x is A(D) = D − 4. Operating on (0.0.42) with D − 4 yields (D − 4)3 y = 0 which has general solution y (x) = yc (x) + A0 x2 e4x . (0.0.42) 562 Consequently, an appropriate trial solution for (0.0.42) is yp (x) = A0 x2 e4x . 
Differentiating this trial solution with respect to x yields

yp'(x) = A0 e^{4x}(4x^2 + 2x), yp''(x) = A0 e^{4x}(16x^2 + 16x + 2).

Inserting these expressions into the