© 2004 David Russell Luke

MATH351: Engineering Mathematics I (Autumn 2004)
D. Russell Luke

Lecture notes to follow Advanced Engineering Mathematics, 2nd Ed., by Michael Greenberg

Department of Mathematics, University of Delaware
Autumn 2004

TABLE OF CONTENTS
List of Figures
List of Tables

Chapter 1: Introduction to Differential Equations
  1.1 Introduction
  1.2 Definitions

Chapter 2: First Order Differential Equations
  2.1 Introduction
  2.2 Linear Equations
  2.3 Examples and Applications
  2.4 Separation of Variables
  2.5 Exact Equations

Chapter 3: Second Order Differential Equations
  3.1 Linear Dependence and Linear Independence
  3.2 nth Order Linear Homogeneous Equations: the general solution
  3.3 Solution of the Homogeneous Equation: constant coefficients
  3.4 A Scandalously Terse Treatment of the Harmonic Oscillator
  3.5 Solution of the nth Order Inhomogeneous Equation with constant coefficients

LIST OF FIGURES

LIST OF TABLES

Chapter 1: INTRODUCTION TO DIFFERENTIAL EQUATIONS
1.1 Introduction

1. Newton's second law of motion: acceleration = force/mass,

    m d²x/dt² = F(t),    (1.1)

   where x(t) = spatial location and F(t) = force. For a constant force F(t) = F₀, integrate once to get

    m dx/dt = F₀ t + A.    (1.2)

2. Newton's first law of motion: "An object in motion tends to stay in motion unless acted on by an outside force" (inertia).

3. Ballistic path. Integrate once more to get

    m x = (1/2) F₀ t² + A t + B.

It's not always so easy to solve diffeq's:

Example 1.1.1

    m d²x/dt² = −k x(t) + F(t).    (1.3)

Formally, integration yields

    m dx/dt + k ∫ x(t) dt = ∫ F(t) dt + A.

1.2 Definitions

• ODE's and PDE's
• Systems of DE's
• Operator notation
• Order
• Solution (on a domain)
• Existence
• Uniqueness
• (Non)linear diff eq's
• (Non)homogeneous diff eq's
• Initial and boundary-value problems

Definition 1.2.1 Ordinary Differential Equations: "one dimensional" differential equations.

Example 1.2.2 (ODE's)

1. m = mass, x = displacement, k = spring stiffness, F = applied force:

    m d²x/dt² + k x = F(t),    (Mass/Spring with restoring force)  (1.4)

2. i = current, L = inductance, C = capacitance, E = applied voltage:

    L d²i/dt² + (1/C) i = dE/dt,    (Electrical Circuit)  (1.5)

3. θ = angular motion, l = pendulum length, g = gravity:

    d²θ/dt² + (g/l) sin θ = 0,    (Pendulum)  (1.6)

4. x = population, c = birth/death rate:

    d²x/dt² = c x,    (Population growth)  (1.7)

5. y = deflection, C = mass density constant:

    d²y/dx² = C √(1 + (dy/dx)²),    (String equation)  (1.8)

6. y = deflection, w = mass loading, E & I = beam material constants:

    EI d⁴y/dx⁴ = −w(x),    (Beam equation)  (1.9)

7. Systems of ODE's:

    dx/dt = −(a + b) x + c y + d z
    dy/dt = a x − (c + e) y + f z    (1.10)
    dz/dt = g x + e y − (d + h) z

   describes the reactions of 3 different chemical compounds with different rate constants a, b, c, d, e, f, g, h.

Definition 1.2.3 Partial Differential Equations: "multi dimensional" differential equations.

Example 1.2.4 (PDE's)

1. u(x, t) = time varying heat distribution, α = diffusivity:

    α² ∂²u/∂x² = ∂u/∂t,    (Heat equation)  (1.11)

2.
u(x, y, z) = steady-state temperature distribution:

    ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = Δu = ∇·∇u = 0,    (Laplace's Equation)  (1.12)

3. u(x, y, t) = vibrating membrane:

    c² (∂²u/∂x² + ∂²u/∂y²) − ∂²u/∂t² = c² Δu − ∂²u/∂t² = 0,    (Wave Equation)  (1.13)

4. u(x, y) = stream function describing slow motion of viscous fluids:

    ∂⁴u/∂x⁴ + 2 ∂⁴u/∂x²∂y² + ∂⁴u/∂y⁴ = 0,    (Biharmonic Equation)  (1.14)

5. Systems of PDE's:

    ∂E₂/∂x₁ − ∂E₁/∂x₂ = 0
    ∂E₁/∂x₁ + ∂E₂/∂x₂ = σ(x)/ε.    (1.15)

These are Maxwell's equations describing the electric field intensity E = (E₁, E₂) in the plane x = (x₁, x₂), with charge distribution density σ(x) and permittivity ε.

Example 1.2.5 (Solutions, existence, uniqueness) Existence is something that mathematicians worry about when it is not clear how to actually solve the problem – it is often easier to show that a solution exists than to find the solution. As an engineer, you should take the following (ad hoc) approach: if you find a well defined function that satisfies the DE, then the solution exists. For example,

    y(x) = 4 cos x − x cos x   solves   y″ + y = 2 sin x.

Actually, any function of the form a sin x + b cos x − x cos x solves the above (infinitely many solutions!). The initial/boundary conditions (see below) determine which solution is the one you need. CAUTION: though you can find a function that satisfies the DE, it might not satisfy the initial/boundary conditions, in which case it could be that a solution does not exist.

The above equations can look pretty intimidating at first sight. It is sometimes helpful to look at differential equations in terms of operators, or functions of functions. For instance, the mass/spring system Eq.(1.4) in operator notation is D(x) = F(t), where

    D(x) = m d²x/dt² + k x.    (1.16)

Solving differential equations is analogous to solving algebraic equations.
You know that the quadratic formula yields the solution to

    f(x) = 4   where f(x) = x² + 3x.

Solving a differential equation is the same type of thing, only now we are looking for the function x that solves Eq.(1.16).

Definition 1.2.6 A function or, more generally, an operator D is linear if

1. D(u + v) = D(u) + D(v), and
2. D(a·u) = a D(u) for any constant a.

Example 1.2.7 ((Non)linear, homogeneous diff eq's) The Schrödinger equation describes the motion of particles in nonrelativistic quantum mechanics.

• Linear differential eq's:

    iℏ ∂u/∂t + (ℏ²/2m) ∂²u/∂x² − V(x) u = 0    (linear, time-dependent Schrödinger Eq.)  (1.17)

• Nonlinear differential eq's:

    i ∂u/∂t + ∂²u/∂x² − a u + b |u|² u = 0    (nonlinear, time-dependent Schrödinger Eq.)  (1.18)

Example 1.2.8 (Initial and boundary-value problems)

1. Initial value problem. Find u(x, t) such that

    c² ∂²u/∂x² − ∂²u/∂t² = 0,
    u(x, 0) = f(x)   and   ∂u/∂t (x, 0) = g(x).

2. Boundary value problem. Find u(x, t) such that

    c² ∂²u/∂x² − ∂²u/∂t² = 0,
    u(0, t) = F(t)   and   u(L, t) = G(t).

3. Initial-boundary value problem. Find u(x, t) such that

    c² ∂²u/∂x² − ∂²u/∂t² = 0,
    u(x, 0) = f(x)   and   u(∞, t) = 0.

Chapter 2: FIRST ORDER DIFFERENTIAL EQUATIONS
2.1 Introduction

We consider general equations of the form

    a₀(x) y′ + a₁(x) y = f(x).    (2.1)

As long as the leading coefficient a₀(x) is far away from zero, we can simplify this to the following:

    y′ + p(x) y = q(x)    (2.2)

(here p = a₁/a₀ and q = f/a₀). Before getting to Eq.(2.2), let's look at what happens if we are interested in regions of x where a₀(x) is really small compared to a₁(x). We write this as a₀(x) ≪ a₁(x) for all x on the interval [x₀, x₁]. In this case we can write

    a₁(x) y ≈ a₀(x) y′ + a₁(x) y = f(x),

which yields the approximate solution y(x) ≈ f(x)/a₁(x). This may seem like cheating, but it is a perfectly reasonable simplification on the domain [x₀, x₁]. Whenever possible, simplify the problem to one you can solve.

2.2 Linear Equations

2.2.1 Linear Homogeneous Equations

Let's suppose a₀(x) and a₁(x) in Eq.(2.1) are about the same magnitude (i.e. we can't neglect the first derivative). Suppose further that f(x) = 0. The simplest ODE's to solve are first order, linear homogeneous equations

    y′ + p(x) y = 0.    (2.3)

Formally we proceed as follows:

1. Rearrange Eq.(2.3) to get dy/dx = −p(x) y.

2. Multiply both sides by dx/y to get dy/y = −p(x) dx.

3. Integrate both sides: ∫ dy/y = −∫ p(x) dx.

4. Recall that ∫ dy/y = ln|y| − C, so that ln|y| = −∫ p(x) dx + C.

5. Now exponentiate both sides to get

    |y| = e^(−∫p(x)dx + C) = A e^(−∫p(x)dx),

   where A := e^C.
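The five-step recipe can be sanity-checked numerically. The sketch below is my own illustration, not from the notes: the choice p(x) = 2x is arbitrary, in which case step 5 gives y = A e^(−x²), and the code compares that formula against a simple forward-Euler march of y′ = −p(x) y.

```python
import math

# Hedged illustration: p(x) = 2x is an arbitrary choice (not from the notes).
# Then -∫p(x)dx = -x**2, so step 5 of the recipe gives y(x) = A*exp(-x**2).
def exact(x, A=3.0):
    return A * math.exp(-x**2)

def euler(p, y0, x0, x1, n=100_000):
    """Forward-Euler integration of y' = -p(x)*y from x0 to x1."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * (-p(x) * y)   # Euler step for the homogeneous equation (2.3)
        x += h
    return y

p = lambda x: 2.0 * x
y_num = euler(p, y0=exact(0.0), x0=0.0, x1=1.0)
print(y_num, exact(1.0))  # the two values should agree to several digits
```

With a small step size the numerical curve tracks the closed-form solution, which is a cheap way to catch sign errors in the exponent.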
We have to be careful about a few things. First, we divided Eq.(2.3) by y, so we had better check that the formal solution we found doesn't pass through the origin. Since e^c > 0 for all real c, as long as p(x) is real and the integral is defined, this solution is formally okay. It doesn't matter whether the constant A is < 0 or > 0: whichever side of zero A lies on, y has the same sign for all x. We can therefore drop the absolute value to yield

    y(x) = A e^(−∫p(x)dx).    (2.4)

It doesn't even matter if A = 0 since, in this case, y = 0, which is also a solution to Eq.(2.3), known as the trivial solution.

2.2.2 Linear Inhomogeneous Equations

We up the ante to equations of the form

    y′(x) + p(x) y(x) = q(x).    (2.5)

2.2.3 Variation of parameters

Invented by Lagrange (1736–1813). The idea is to use the solution to the homogeneous equation to generate a solution to the inhomogeneous problem, also known as finding the particular solution. To motivate the technique of Lagrange, suppose the coefficient A in Eq.(2.4) is actually a function of x rather than just a constant:

    y(x) = g(x) e^(−∫p(x)dx).    (2.6)

This is the origin of the name variation of parameters: the leading coefficient or parameter varies. Being careful to apply the product rule correctly, we get
    dy/dx = g′ e^(−∫p(x)dx) − g p e^(−∫p(x)dx).

Lagrange (probably) just fiddled around, plugging y(x) = g e^(−∫p(x)dx) into Eq.(2.5), to get

    y′ + p(x)y = g′(x) e^(−∫p(x)dx) − g(x) p(x) e^(−∫p(x)dx) + p(x) g(x) e^(−∫p(x)dx) = q(x).

Note that the two terms involving g(x) cancel! So we have

    g′(x) e^(−∫p(x)dx) = q(x)  ⟺  g′(x) = q(x) e^(∫p(x)dx).

Now, we simply integrate both sides:

    g(x) = ∫ q(x) e^(∫p(x)dx) dx + A.

Substitute this expression for g into Eq.(2.6):

    y(x) = ( ∫ q(x) e^(∫p(x)dx) dx + A ) e^(−∫p(x)dx).    (2.7)

Note that the solution to the homogeneous problem Eq.(2.3) is embedded in the solution to the inhomogeneous problem. Denote the homogeneous solution by

    y_h(x) = A e^(−∫p(x)dx).

Define the particular solution by

    y_p(x) = e^(−∫p(x)dx) ∫ q(x) e^(∫p(x)dx) dx.

The solution to the inhomogeneous problem can then be written as

    y(x) = y_p(x) + y_h(x).    (2.8)

2.2.4 Integrating factors

Leonhard Euler (1707–1783) had a different method for solving DE's which uses integrating factors. This technique is less systematic and more hit-or-miss than Lagrange's variation of parameters, but when it works, it's quite simple. The idea is to introduce a factor to y in Eq.(2.5) that "completes the derivative", that is, find σ(x) such that

    σ y′ + σ p y = d(σy)/dx.    (2.9)

If such a σ does indeed exist, then Eq.(2.5) can be rewritten as

    d(σy)/dx = σ(x) q(x).

Now, to solve this, you just integrate:

    σ y = ∫ σ(x) q(x) dx + A,

so that the general solution is

    y = (1/σ(x)) ( ∫ σ(x) q(x) dx + A ).    (2.10)

Convince yourself that this is formally compatible with Eq.(2.7).

2.2.5 Exact solutions, initial conditions, and existence/uniqueness

The general solution Eq.(2.7) isn't very helpful for specific problems. We need more data, an initial condition, to tie down the exact solution. If we were to use conventional best practices for formulating ODE's we would write Eq.(2.5) as

    y′ + p(x) y = q(x)    (2.11)
    y(a) = b.    (2.12)

The exact solution then becomes

    y(x) = ( ∫ₐˣ q(s) e^(∫ₐˢ p(ξ)dξ) ds + b ) e^(−∫ₐˣ p(ξ)dξ).    (2.13)

Check that y(a) = b. Now, existence of this solution depends on the existence of the integrals. Don't take existence of the integrals for granted; this is not a trivial detail.

Standard uniqueness proof: Suppose a solution to Eq.(2.11)–Eq.(2.12) exists, call it y₁. Suppose that there is another solution to Eq.(2.11)–Eq.(2.12), call it y₂. From these we create a third function y₃ = y₁ − y₂, which satisfies the homogeneous problem (convince yourself of this)
    y₃′ + p(x) y₃ = 0    (2.14)
    y₃(a) = 0.    (2.15)

Now, from Eq.(2.4) we know that the solution to Eq.(2.14)–Eq.(2.15) is given by

    y₃(x) = A e^(−∫ₐˣ p(ξ)dξ),

where A is chosen so that y₃(a) = 0, i.e. A = 0. But this means that y₃(x) = 0 for all x, which means that y₂ = y₁ for all x. So y₂ wasn't another distinct solution after all. Thus y₁ must be unique! ✷

2.3 Examples and Applications

We found the general solution for problem 2.2.2b in Greenberg by variation of parameters. We also used variation of parameters (rather hastily) to find the exact solution to 2.2.6e. We also looked at problem 2.3.14, pollution in a river. The solution to this problem is in the back of Greenberg.

2.4 Separation of Variables

In the definition of linearity 1.2.6, we used the notion of operators. Another way to look at differential operators is the following general form:

    F(x, y, y′, ..., y⁽ⁿ⁾) = 0.

We are concerned here with general first-order ODE's

    F(x, y, y′) = 0.

We can also see from this whether or not the corresponding differential operator is linear by checking to make sure that F's action on y and y′ is linear. We limit this discussion to DE's that allow us to push symbols around to get the y′ by itself on the left-hand side of the equation to yield

    y′ = f(x, y).

It isn't a given that you can get the equation into this form. For example,

    F(x, y, y′) = sin(y′) + y′ − y − 10x² + 2.

In general, there are many more equations we can't solve than those we can. We focus on the ones we can solve. If we can further express the equation as

    y′ = X(x) Y(y),    (2.16)

then we say that the equation is separable. Divide both sides of Eq.(2.16) by Y(y) (assuming this is legal!) to get

    (1/Y(y)) dy/dx = X(x).

Now integrate both sides (formally) with respect to x to get

    ∫ (1/Y(y)) (dy/dx) dx = ∫ X(x) dx,

or more simply

    ∫ dy/Y(y) = ∫ X(x) dx.    (2.17)

As long as 1/Y(y) is continuous in y over the relevant y interval, and X(x) is a continuous function of x over the relevant x interval, the above integrals exist and define a general solution for Eq.(2.16).
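As a quick check of the recipe in Eq.(2.17), here is a small illustration of my own (the separable equation y′ = x·y is an arbitrary choice, not one of the examples from class). Separating gives ∫dy/y = ∫x dx, so ln|y| = x²/2 + C, and y(0) = 1 forces y(x) = e^(x²/2); the code compares this against a Runge–Kutta integration.

```python
import math

def separable_exact(x):
    # From Eq.(2.17) with Y(y) = y and X(x) = x: ln|y| = x**2/2 + C,
    # and y(0) = 1 forces C = 0, so y = exp(x**2 / 2).
    return math.exp(x**2 / 2.0)

def rk4(f, y0, x0, x1, n=1000):
    """Classical 4th-order Runge-Kutta for y' = f(x, y)."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2)
        k4 = f(x + h, y + h * k3)
        y += h/6 * (k1 + 2*k2 + 2*k3 + k4)
        x += h
    return y

y_num = rk4(lambda x, y: x * y, y0=1.0, x0=0.0, x1=1.0)
print(y_num, separable_exact(1.0))  # both ≈ 1.6487 = e**0.5
```

The agreement confirms that the formal separation steps really do produce the solution curve through (0, 1).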
Note that we haven't worried about the linearity of the differential equation with respect to y (it still has to be linear with respect to y′). As long as the equation is separable, this is not a problem.

Examples done in class:

1. y′ − 3x² e^(−y) = 0; y(0) = 0, which has the solution y(x) = ln(x³ + 1). Note that the domain is (−1, ∞).

2. y′ = y² + 3y; y(0) = −4. This one required partial fractions to evaluate the integral we get from Eq.(2.17). The solution is

    y(x) = 12 e^(3x) / (1 − 4 e^(3x)).

3. y′ = (2y − x)/(y − 2x). Here we had to use an inspired trick: the change of variables v = y/x. This yields

    v′ = (1/x) ( (2v − 1)/(v − 2) − v ),

which is clearly separable. The solution is

    v = (2x ± √(3x² + C))/x  ⟹  y = 2x ± √(3x² + C).

Note that embedded in this general solution are 4 distinct solution curves. Which one we are interested in depends on the domain of interest and initial condition.

2.5 Exact equations

Given

    dy/dx = −M(x, y)/N(x, y).    (2.18)

Rewrite this as

    N(x, y) dy + M(x, y) dx = 0.

What we are looking for in exact equations is a function F(x, y) such that

    dF(x, y) = N(x, y) dy + M(x, y) dx,    (2.19)

where dF(x, y) denotes the total derivative. We can then solve Eq.(2.18) by simply integrating Eq.(2.19).

Example 2.5.1

    4 cos(2u) du − e^(−5v) dv = 0;   v(0) = −6.
Here M(u, v) = 4 cos(2u) and N(u, v) = −e^(−5v). We can think of this as M(u, v) = ∂F/∂u and N(u, v) = ∂F/∂v, so that

    F(u, v) = ∫ 4 cos(2u) du − ∫ e^(−5v) dv = const.

Evaluating the integrals yields

    F(u, v) = 2 sin(2u) + (1/5) e^(−5v) = C.

Solving for v yields

    v = −(1/5) ln( 5 (C − 2 sin(2u)) ).

The initial condition v(0) = −6 implies that C = (1/5) e^(30).

The test for determining when an equation is exact:

    ∂M(x, y)/∂y = ∂N(x, y)/∂x.    (2.20)

Check that the above example satisfies this test.

Chapter 3: SECOND ORDER DIFFERENTIAL EQUATIONS
When you solve the equation

    a₂x² + a₁x + a₀ = 0

you get two solutions, also called roots. Sometimes there is a double root, as in the equation a₂x² = 0. When you solve the equation

    a₃x³ + a₂x² + a₁x + a₀ = 0

you expect to find three solutions. Again, you may find multiple roots. Keep this analogy in mind as we explore higher order DE's. In DE's we are solving equations where the unknowns are functions. When we distinguish between "multiple roots" or "distinct roots" of a function equation we use fancy words like linear (in)dependence. I recommend that you review your high school matrix algebra (see Appendix B of Greenberg).

3.1 Linear Dependence and Linear Independence

Definition 3.1.1 A set of functions {u₁, ..., uₙ} is said to be linearly dependent with respect to an interval I if one of the functions can be expressed as a linear combination of the others, that is, if there is some j ∈ {1, ..., n} for which

    uⱼ(x) = Σ_{i=1, i≠j}^n αᵢ uᵢ(x)   for all x ∈ I.    (3.1)

If there is no way to satisfy the above equation for any j, then the set of functions {u₁, ..., uₙ} is said to be linearly independent.

Example 3.1.2 (Linear dependence) The set {cos x, eˣ, e⁻ˣ, cosh x} is linearly dependent.

Example 3.1.3 (Linear independence) The set {cos x, eˣ, e⁻ˣ} is linearly independent.

Theorem 3.1.4 (Test of Linear (In)dependence) The set of functions {u₁, ..., uₙ} is linearly dependent if and only if there is a nontrivial solution (α₁, ..., αₙ) to

    Σ_{i=1}^n αᵢ uᵢ(x) = 0.

This condition is simple in principle, but difficult to work with in practice. For practical situations one can make use of the Wronskian:

Definition 3.1.5 The Wronskian determinant of the set of functions {u₁, ..., uₙ} is defined by

    W[u₁, ..., uₙ](x) := det [ u₁(x) ⋯ uₙ(x) ; u₁′(x) ⋯ uₙ′(x) ; ⋮ ; u₁⁽ⁿ⁻¹⁾(x) ⋯ uₙ⁽ⁿ⁻¹⁾(x) ],    (3.2)

assuming that u₁, ..., uₙ are n − 1 times differentiable.

Theorem 3.1.6 (The Wronskian test for linear independence) Given the set of n−1 times differentiable functions {u₁, ..., uₙ} on some interval I, if W[u₁, ..., uₙ](x) ≠ 0 on I, then {u₁, ..., uₙ} are linearly independent on I.

Note that the converse is not, in general, true. That is, it is not always true that if W[u₁, ..., uₙ](x) = 0 on I, then {u₁, ..., uₙ} are linearly dependent on I. If we further assume that {u₁, ..., uₙ} solve an nth order linear homogeneous differential equation, then the Wronskian test is sufficient. This is the meaning of the following theorem:

Theorem 3.1.7 (Necessary and Sufficient Wronskian Test) Given the set of functions {u₁, ..., uₙ} that satisfy the nth order linear, homogeneous ODE

    dⁿu/dxⁿ + p_{n−1}(x) dⁿ⁻¹u/dxⁿ⁻¹ + ⋯ + p₁(x) du/dx + p₀(x) u = 0,   x ∈ I,

where the coefficients pⱼ(x) are continuous on the interval I, THEN W[u₁, ..., uₙ](x) = 0 on I if and only if the set {u₁, ..., uₙ} is linearly dependent on I.

3.2 nth Order Linear Homogeneous Equations: the general solution

Recall that

    y′(x) + p(x) y(x) = 0,   x ∈ I,

has the general solution

    y(x) = A e^(−∫p(x)dx)

for p(x) continuous on I. If, in addition, we have the initial data y(a) = b for some a ∈ I, then the exact or particular solution is

    y(x) = b e^(−∫ₐˣ p(ξ)dξ),   x ∈ I.

In this section we look at the more general nth order linear homogeneous ODE:

    Σ_{j=0}^n pⱼ(x) (dʲ/dxʲ) y(x) = 0   where pₙ(x) := 1,    (3.3)

and, for some a ∈ I,

    y(a) = b₁;  y′(a) = b₂;  ... ;  y⁽ⁿ⁻¹⁾(a) = bₙ.    (3.4)

The next theorem guarantees that solutions to the above initial value problem exist and are unique.

Theorem 3.2.1 (existence and uniqueness) If pⱼ(x) is continuous for all j on I, then there exists a unique solution to the initial value problem Eq.(3.3)–(3.4).

Using operator notation we denote

    L(·) = Σ_{j=0}^n pⱼ(x) (dʲ/dxʲ)(·).

This operator is linear (prove it!). A key property of linear operators that you will see over and over is the principle of superposition of solutions.

Theorem 3.2.2 (superposition of solutions) If y₁, y₂, ..., yₙ are solutions to Eq.(3.3), then so is any linear combination of these, that is, any function of the form

    v(x) = Σ_{j=1}^n cⱼ yⱼ(x).

Proof. Since L is a linear operator,

    L(v(x)) = Σ_{j=1}^n cⱼ L(yⱼ(x)).
But, by assumption, L(yⱼ(x)) = 0 for all j, hence L(v(x)) = 0. ✷

Careful: the above theorem only applies to linear homogeneous ODE's.

Just as with algebraic equations, where an nth degree polynomial has at most n distinct roots, the next result tells us that an nth order linear homogeneous ODE has exactly n linearly independent solutions. Note that the latter result for ODE's is much stronger than the analogous conventional facts from algebraic equations. This is kind of amazing.

Theorem 3.2.3 (general solutions to nth order linear homogeneous ODE's) Let p₁(x), p₂(x), ..., pₙ(x) be continuous on the open interval I. The nth order linear homogeneous ODE Eq.(3.3) has n, and only n, linearly independent solutions on I. Moreover, any function of the form

    v(x) = Σ_{j=1}^n cⱼ yⱼ(x)    (3.5)

also solves Eq.(3.3).

Proof. This is in the book. Check it out. ✷

The n linearly independent solutions to Eq.(3.3) are referred to as a basis or fundamental set of solutions to Eq.(3.3).

3.3 Solution of the Homogeneous Equation: constant coefficients

In this section we detail a method for solving equations of the form

    L(y(x)) = Σ_{j=0}^n aⱼ (dʲ/dxʲ) y(x) = 0   where aₙ = 1.    (3.6)

Notice that this is an nth order, linear homogeneous ODE with constant coefficients. We'll start off with the case n = 2 to see the pattern, and then use induction to get the general form. Suppose there is a solution of the form y(x) = e^(λx). Plugging this into Eq.(3.6) with n = 2 we get

    L(e^(λx)) = λ² e^(λx) + a₁ λ e^(λx) + a₀ e^(λx) = 0.

More suggestively, we have

    (λ² + a₁λ + a₀) e^(λx) = 0.

Since e^(λx) is only zero in the limit as λx → −∞, it must be that

    λ² + a₁λ + a₀ = 0,    (3.7)

that is,

    λ = ( −a₁ ± √(a₁² − 4a₀) ) / 2.    (3.8)

Equation (3.7) is called the characteristic equation. Theorem 3.2.3 guarantees that there are 2 and only 2 linearly independent solutions. This is clear from Eq.(3.8) if a₁² − 4a₀ ≠ 0. Supposing this is the case, the two linearly independent solutions to Eq.(3.6) with n = 2 are {e^(λ₁x), e^(λ₂x)}, where λ₁ and λ₂ are the (distinct) roots of Eq.(3.7). The general solution, then, is of the form

    y(x) = C₁ e^(λ₁x) + C₂ e^(λ₂x).    (3.9)

Fact: the set of functions {e^(λ₁x), ..., e^(λₙx)} is linearly independent if and only if λᵢ ≠ λⱼ for all i ≠ j. The λⱼ's can be real or complex.

3.3.1 Higher order equations

We won't prove it, but the following should seem plausible: If {λ₁, ..., λₙ} are distinct roots of the nth degree characteristic polynomial

    λⁿ + Σ_{j=0}^{n−1} aⱼ λʲ,    (3.10)

then the general solution to Eq.(3.6) is

    y(x) = Σ_{j=1}^n Cⱼ e^(λⱼx).    (3.11)

To prove this you would use induction on n (i.e. you know it is true for some n ≥ 2, so show that it holds for n + 1).

3.3.2 Multiple roots: reduction of order

Let's return to the case n = 2. If it happens that a₁² − 4a₀ = 0 in Eq.(3.8) then we have a repeated root, and we have to work a bit harder to find the second solution. The trick is very similar to the variation of parameters trick of Chapter 2, but for reasons explained below, it is called reduction of order. We know that λ = −a₁/2 is a double root of Eq.(3.7), so one of the two linearly independent solutions is

    y₁(x) = C₀ e^(−a₁x/2).    (3.12)

To find the other solution, multiply the first solution by some unknown function g(x), plug this into the ODE, and solve for g(x): y₂(x) = g(x) e^(−a₁x/2), so

    0 = (g″ − g′a₁ + g a₁²/4) e^(−a₁x/2) + a₁ (g′ − g a₁/2) e^(−a₁x/2) + a₀ g e^(−a₁x/2)
      = (g″ + (a₀ − a₁²/4) g) e^(−a₁x/2)
      = g″ e^(−a₁x/2).

The last equality uses the fact that a₀ − a₁²/4 = 0. Hence g must satisfy the following 2nd order, linear homogeneous ODE with constant coefficients: g″ = 0. Let h = g′ so that the above equation is equivalent to the following first order, linear homogeneous ODE with constant coefficients: h′ = 0, which, we know, has the solution

    h = C₁  ⟹  g = C₁x + C₂.

Wrapping this up, we have the second solution:

    y₂(x) = (C₁x + C₂) e^(−a₁x/2).    (3.13)

The set of functions given by Eq.(3.12) and Eq.(3.13),

    {y₁(x), y₂(x)} = {C₀ e^(−a₁x/2), (C₁x + C₂) e^(−a₁x/2)},

is a linearly independent set (prove it). So the general solution to Eq.(3.6) with n = 2 and a multiple root (a₁² − 4a₀ = 0) is

    y(x) = C̃₁ y₁(x) + C̃₂ y₂(x) = A₁ e^(−a₁x/2) + A₂ x e^(−a₁x/2).    (3.14)

Here I've just reorganized/renamed the arbitrary constants. The substitution h = g′ above is the reason this technique is called reduction of order. In general, using an induction argument, we can show that if λⱼ is a root of the characteristic polynomial with multiplicity k, then

    {e^(λⱼx), x e^(λⱼx), ..., x^(k−1) e^(λⱼx)}

are linearly independent solutions of the corresponding ODE.

3.4 A Scandalously Terse Treatment of the Harmonic Oscillator

I would love to tarry a while to smell this rose, but in the interest of maintaining some kind of compatibility with the other sections, I have to be brief. Consider the second order, linear ODE

    m y″ + c y′ + k y = f(t).    (3.15)

This was mentioned in Chapter 1 as a model for the canonical damped spring with forcing. We suppose that m, k > 0, where these represent the mass and spring constant respectively. The forcing is the inhomogeneous term on the right-hand side. We'll explore how to solve the forced oscillator in the next section. For now, let's look at just the free or unforced oscillator:

    m y″ + c y′ + k y = 0.
(3.16)

Using the techniques of the previous section (plug in the ansatz y(t) = e^(λt)), we find that the characteristic equation is

    m λ² + c λ + k = 0,

which has the solution

    λ± = ( −c ± √(c² − 4mk) ) / (2m),

so the general solution is of the form

    y(t) = a₁ e^(λ₊t) + a₂ e^(λ₋t)
         = a₁ e^(−ct/(2m)) e^(√(c²−4mk) t/(2m)) + a₂ e^(−ct/(2m)) e^(−√(c²−4mk) t/(2m)).    (3.17)

First thing to note is that if both λ± < 0 then y(t) → 0 as t → ∞. That is, the motion described by y(t) is damped. The damping will clearly come from the coefficient c in Eq.(3.16) as long as c > 0. If, on the other hand, both λ± > 0, then y(t) → ∞ as t → ∞. If λ₋ < 0 and λ₊ > 0, then it's uncertain what will happen in general, and we'll need to look at initial conditions to determine what the particular solution should be.

Let's see what happens if there is no damping, i.e. c = 0. Then m y″ + k y = 0, with roots of the characteristic equation

    λ± = ±√(−4mk)/(2m) = ±i √(k/m),    (3.18)

and the general solution

    y(t) = a₁ e^(it√(k/m)) + a₂ e^(−it√(k/m)).    (3.19)

Recall that

    cos(√(k/m) t) = ( e^(it√(k/m)) + e^(−it√(k/m)) ) / 2   and   sin(√(k/m) t) = ( e^(it√(k/m)) − e^(−it√(k/m)) ) / (2i)

are linearly independent functions. We can thus equivalently write the general solution as

    y(t) = a₁ cos(t√(k/m)) + a₂ sin(t√(k/m)).    (3.20)

From this formulation it is clear that the solution, whatever it is, will oscillate. The natural frequency of these oscillations is ω = √(k/m). The amplitude of these oscillations is determined by the constants a₁ and a₂.

Now, if 0 ≠ c² < 4mk, then the term in the surd in Eq.(3.17) will be negative, so the general solution will still be complex and exhibit an oscillating behavior. In general, expect oscillatory solutions whenever the characteristic polynomial has complex roots. What's interesting about this case, though, is that for c > 0 these solutions, even though they oscillate, will be damped to zero as t → ∞ (see Eq.(3.17)). If, on the other hand, c < 0, solutions will grow, or, as we say in the business, blow up (if time is on the order of milliseconds, you can see where this description might be appropriate). In the next section, we'll see how to solve the forced harmonic oscillator Eq.(3.15).

3.5 Solution of the nth Order Inhomogeneous Equation with constant coefficients

There are lots of techniques that can be applied, but many of them only apply to ODE's of a special form. We'll look at the extension of the variation of parameters method used in Ch.2 that, while cumbersome, is a general purpose method. Our goal here is to generalize the variation of parameters technique in Chapter 2 to nth order linear ODE's. As usual, we will show the n = 2 case to see the pattern and extrapolate this to general n:

    L(y) = y″(x) + a₁ y′(x) + a₀ y(x) = f(x).    (3.21)

The basic strategy is to use the solution to the homogeneous problem to construct an ansatz to the inhomogeneous problem. We know from the previous section that the homogeneous solution is

    y_h(x) = c₁ y₁(x) + c₂ y₂(x),

where y₁ and y₂ are the linearly independent solutions. If the characteristic equation corresponding to Eq.(3.21) doesn't have multiple roots, λ₋ ≠ λ₊, then this is

    y_h(x) = c₁ e^(λ₋x) + c₂ e^(λ₊x).

If it does, then the general solution is of the form

    y_h(x) = c₁ e^(λx) + c₂ x e^(λx).

In any case, we use these to generate the particular solution

    y_p(x) = c₁(x) y₁(x) + c₂(x) y₂(x).

Before we go any further, notice that we have two unknowns, c₁ and c₂. In order to solve for these unknowns, we need at least two equations. The first equation we must solve comes from the ODE:

    f = L(y_p) = L(c₁y₁ + c₂y₂) = L(c₁y₁) + L(c₂y₂)
      = Σ_{j=1}^2 [ (cⱼyⱼ)″ + a₁ (cⱼyⱼ)′ + a₀ cⱼyⱼ ]
      = Σ_{j=1}^2 [ cⱼ″yⱼ + 2cⱼ′yⱼ′ + cⱼyⱼ″ + a₁ (cⱼ′yⱼ + cⱼyⱼ′) + a₀ cⱼyⱼ ]
      = (d/dx + a₁)(c₁′y₁ + c₂′y₂) + c₁ L(y₁) + c₂ L(y₂) + (c₁′y₁′ + c₂′y₂′).    (3.22)

This is getting hairy now, so it's time to impose another equation in order to pin down c₁ and c₂ (I am in the lucky position to have this freedom). I hate the thought of having cⱼ″, so I'm going to impose the following:

    c₁′y₁ + c₂′y₂ = 0,

which knocks out the first operator in Eq.(3.22) and leaves me with
    f = c₁ L(y₁) + c₂ L(y₂) + (c₁′y₁′ + c₂′y₂′).    (3.23)

But recall that L(y₁) = 0 and L(y₂) = 0, so Eq.(3.22) and Eq.(3.23) yield the second manageable equation

    c₁′y₁′ + c₂′y₂′ = f.    (3.24)

Writing this using matrix notation we have

    [ y₁  y₂  ] [ c₁′ ]   [ 0 ]
    [ y₁′ y₂′ ] [ c₂′ ] = [ f ].

Formally, the solution to this, in terms of c₁′ and c₂′, is

    [ c₁′ ]   [ y₁  y₂  ]⁻¹ [ 0 ]
    [ c₂′ ] = [ y₁′ y₂′ ]    [ f ].

Integrating both sides, we get (formally)

    [ c₁ ]    ˣ [ y₁  y₂  ]⁻¹ [ 0 ]
    [ c₂ ] = ∫  [ y₁′ y₂′ ]    [ f ] dx.

We apply Cramer's rule to explicitly represent the coefficients:

    c₁′(x) = W₁(x)/W₀(x)   and   c₂′(x) = W₂(x)/W₀(x),

where

    W₀(x) := det [ y₁(x)  y₂(x) ; y₁′(x) y₂′(x) ],
    W₁(x) := det [ 0  y₂(x) ; f(x)  y₂′(x) ],
    W₂(x) := det [ y₁(x)  0 ; y₁′(x)  f(x) ].

Hence

    cⱼ(x) = ∫ˣ Wⱼ(ξ)/W₀(ξ) dξ,

and, finally,

    y_p(x) = Σ_{j=1}^2 ( ∫ˣ Wⱼ(ξ)/W₀(ξ) dξ ) yⱼ(x).

In general, for any positive integer n, for the linear, nth order differential operator

    L(y(x)) = y⁽ⁿ⁾(x) + Σ_{j=0}^{n−1} pⱼ(x) y⁽ʲ⁾(x),    (3.25)

the linear, nth order differential equation

    L(y(x)) = f(x)    (3.26)

has the solution

    y_p(x) = Σ_{j=1}^n ( ∫ˣ Wⱼ(ξ)/W₀(ξ) dξ ) yⱼ(x),    (3.27)

where yⱼ(x) (j = 1, ..., n) are linearly independent solutions to the homogeneous ODE L(y) = 0,

    W₀ := det [ y₁ ⋯ yₙ ; y₁′ ⋯ yₙ′ ; ⋮ ; y₁⁽ⁿ⁻¹⁾ ⋯ yₙ⁽ⁿ⁻¹⁾ ],    (3.28)

and, for j = 1, ..., n, Wⱼ is the determinant of the same matrix with the jth column replaced by (0, ..., 0, f)ᵀ:

    Wⱼ := det [ y₁ ⋯ yⱼ₋₁ 0 yⱼ₊₁ ⋯ yₙ ; ⋮ ; y₁⁽ⁿ⁻²⁾ ⋯ yⱼ₋₁⁽ⁿ⁻²⁾ 0 yⱼ₊₁⁽ⁿ⁻²⁾ ⋯ yₙ⁽ⁿ⁻²⁾ ; y₁⁽ⁿ⁻¹⁾ ⋯ yⱼ₋₁⁽ⁿ⁻¹⁾ f yⱼ₊₁⁽ⁿ⁻¹⁾ ⋯ yₙ⁽ⁿ⁻¹⁾ ].    (3.29)

3.5.1 Undetermined Coefficients

There is another technique that uses less weighty tools and is less systematic, but is sometimes much quicker than variation of parameters. This technique is called the method of undetermined coefficients and works only when the following 2 conditions are satisfied:

    linear inhomogeneous ODE's with constant coefficients;    (3.30)
    inhomogeneities whose derivatives generate a finite set of linearly independent functions.    (3.31)

...
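To make the Wronskian formulas concrete for n = 2, here is a sketch of my own (a check I added, not an example from the notes or Greenberg) that applies Eq.(3.27) to y″ + y = 2 sin x, whose particular solution −x cos x appeared back in Example 1.2.5. Here y₁ = cos x, y₂ = sin x, and W₀ ≡ 1, so c₁′ = −y₂ f and c₂′ = y₁ f.

```python
import math

# y'' + y = f(x) with f(x) = 2*sin(x); homogeneous solutions y1 = cos, y2 = sin.
# Variation of parameters (n = 2): W0 = y1*y2' - y1'*y2 = cos**2 + sin**2 = 1,
# so Cramer's rule gives c1'(x) = -y2(x)*f(x) and c2'(x) = y1(x)*f(x).
f = lambda x: 2.0 * math.sin(x)
y1, y2 = math.cos, math.sin

def trapezoid(g, a, b, n=10_000):
    """Composite trapezoid rule for the indefinite integrals in Eq.(3.27)."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return h * s

def yp(x):
    c1 = trapezoid(lambda t: -y2(t) * f(t), 0.0, x)  # c1 = ∫ W1/W0, with W0 = 1
    c2 = trapezoid(lambda t:  y1(t) * f(t), 0.0, x)  # c2 = ∫ W2/W0
    return c1 * y1(x) + c2 * y2(x)

# Closed form worked out by hand: yp(x) = -x*cos(x) + sin(x); the extra sin(x)
# is a homogeneous term picked up by choosing 0 as the lower integration limit.
x = 1.3
print(yp(x), -x * math.cos(x) + math.sin(x))
```

The two printed values agree, confirming that the Wronskian recipe reproduces the −x cos x particular solution (up to a harmless homogeneous term fixed by the integration limits).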