Ordinary Differential Equations: A Brief Eclectic Tour

David A. Sánchez
Texas A&M University

Published and distributed by The Mathematical Association of America

CLASSROOM RESOURCE MATERIALS

Classroom Resource Materials is intended to provide supplementary classroom material for students—laboratory exercises, projects, historical information, textbooks with unusual approaches for presenting mathematical ideas, career information, etc.

Committee on Publications
Gerald Alexanderson, Chair

Zaven Karian, Editor
Frank Farris
Julian Fleron
Sheldon P. Gordon
Yvette C. Hester
William J. Higgins
Mic Jackson
Paul Knopp
David E. Kullman
Millianne Lehmann
William A. Marion
Stephen B. Maurer
Edward P. Merkes
Judith A. Palagallo
Andrew Sterrett, Jr.

101 Careers in Mathematics, edited by Andrew Sterrett
Archimedes: What Did He Do Besides Cry Eureka?, Sherman Stein
Calculus Mysteries and Thrillers, R. Grant Woods
Combinatorics: A Problem Oriented Approach, Daniel A. Marcus
Conjecture and Proof, Miklós Laczkovich
A Course in Mathematical Modeling, Douglas Mooney and Randall Swift
Cryptological Mathematics, Robert Edward Lewand
Elementary Mathematical Models, Dan Kalman
Geometry From Africa: Mathematical and Educational Explorations, Paulus Gerdes
Interdisciplinary Lively Application Projects, edited by Chris Arney
Laboratory Experiences in Group Theory, Ellen Maycock Parker
Learn from the Masters, Frank Swetz, John Fauvel, Otto Bekken, Bengt Johansson, and Victor Katz
Mathematical Modeling in the Environment, Charles Hadlock
Ordinary Differential Equations: A Brief Eclectic Tour, David A. Sánchez
A Primer of Abstract Mathematics, Robert B. Ash
Proofs Without Words, Roger B. Nelsen
Proofs Without Words II, Roger B. Nelsen
A Radical Approach to Real Analysis, David M. Bressoud
She Does Math!, edited by Marla Parker
Solve This: Math Activities for Students and Clubs, James S. Tanton

MAA Service Center
P.O. Box 91112
Washington, DC 20090-1112
1-800-331-1MAA    FAX: 1-301-206-9789
Preface — Read This First!

A Little History

The study of ordinary differential equations is a rich subject, dating back to even before
Isaac Newton and his laws of motion, and is one of the principal building blocks to
describe a process wherein a quantity (mass, current, population, etc.) exhibits change
with respect to time or distance. Prior to the late 1800s, the greater part of the study of
differential equations was devoted to trying to analytically describe solutions in closed
forms or via power series or integral representations. The differential equations were
almost lost in the sea of special functions (Bessel, Legendre, hypergeometric, etc.) which
were their solutions.

But with the publication of Henri Poincaré's memoir on the three-body problem in celestial mechanics in 1890, and his later three-volume work on the topic, the subject
took a dramatic turn. Analysis was buttressed with geometry and topology to study
the dynamics and stability of ordinary differential equations and systems thereof, whose
solutions could only be approximated at best. Thus was created the qualitative theory
of ordinary differential equations and the more general subject of dynamical systems.
Unfortunately, up until the last half of the past century, very little of the theory was
available to students (with the exception of Russia, where a large, influential school flourished). Differential equations texts for undergraduates were largely dull “plug and chug” expositions which gave the lasting impression that the subject was a bag of tricks and of time-consuming infinite series or numerical approximation techniques. Some faculty
still believe this.

But with the advances made in computers and computer graphics, available to students
through programs like MAPLE, MATLAB, MATHEMATICA, etc., the subject and the
textbooks took on a new vitality. Especially important was the access to very powerful
numerical techniques such as RKF45 to approximate solutions to high degrees of accuracy,
coupled with very sophisticated graphics to be able to display them. Now qualitative
theory is linked with quantitative theory, making for a very attractive course providing valuable tools and insights. Furthermore, mathematical modeling of physical, biological, or socioeconomic problems using differential equations is now a major component of undergraduate textbooks. The slim, dry tomes of the pre-1960s, filled with arcane substitutions and exercises intended more to assess the reader's integration techniques, have been replaced with 600-plus-page volumes crammed with graphics, projects, pictures and computing.

The Author's View

I have taught ordinary differential equations at all audience levels (engineering students,
mathematics majors, graduate students), and have written or coauthored three books in
the subject, as well as over forty research papers. One of the books was a forerunner of the colossal texts mentioned above, and may have been the first book that blended
numerical techniques within the introductory chapters, instead of lumping them together
in one largely ignored chapter towards the back of the book.

Several years ago, I reviewed a number of the current textbooks used in the introductory
courses (American Mathematical Monthly), and the article was intended to ask the
question “Why have ordinary differential equation textbooks become so large?” But
on later reflection I think the right question would have been whether the analytic and
geometric foundations of the subject were blurred or even lost in the exposition. This
is a question of real importance for the nonexpert teaching the course, and of greater
importance to the understanding of the student.

Pick up almost any of the current texts and turn to the first chapter, which usually starts off with a section entitled Classification of Differential Equations. This consists of
examples of linear, nonlinear, first order, second order, et al., explicit and implicit ordinary
differential equations, with a few partial differential equations thrown in for contrast, and
probably confusion. Then there follows a set of problems with questions like “For the
following differential equations identify which are linear, nonlinear, etc. and what are
their orders?”

This section is usually followed by a discussion named Solutions or Existence
of Solutions, and the students, who may have a hazy understanding at best of what are
the objects under examination, are now presented with functions (which come from who
knows where) to substitute and to verify identities with. Or they may be presented the
existence and uniqueness theorem for the initial value problem, then asked to verify for
some specific equations whether a solution exists, when they are not at all sure of what is a solution, or even what is a differential equation.

This confusion can linger well into the course, but the student may gain partial relief by finding integrating factors, solving characteristic polynomials, and comparing coefficients,
still unclear about what’s going on. This insecurity may be relieved by turning to the
computer and creating countless little arrows waltzing across the screen, with curves
pirouetting through them which supposedly describe the demise of some combative
species, the collapse of a bridge, or even more thrilling, the onset of chaos. I do not
blame the well-qualified authors of these books; they have an almost impossible task
of combining theory, technology, and applications to make the course more relevant and
exciting, and try to dispel the notion that ordinary differential equations is a “bag of tricks.” But I think it is worthwhile to occasionally “stop and smell the roses,” as they say, instead of galumphing through this rich garden, and that is the purpose of this little book.

A Tour Guide

The most important points are:

1. This is not a textbook, but is instead a collection of approaches and ideas worth
considering to gain further insight, of examples or results which bring out or amplify an important topic or behavior, and an occasional suggested problem. Current textbooks have loads of good problems.

2. The book can be used in several ways:

a. It can serve as a resource or guide in which to browse for the professor who is embarking on teaching an undergraduate course in ordinary differential equations.

b. It could be used as a supplementary text for students who want a deeper understanding of the subject.

c. For a keen student it could serve as a textbook if supplemented by some
problems. These could be found in other introductory texts or college outlines
(e.g. Schaum’s), but more challenging would be for the student to develop his
or her own exercises!

Consequently, the book is more conceptual than definitive, and more lighthearted than pedagogic.

There is very little mathematical modeling in the book; that area is well covered by the
existing textbooks and computer-based course materials. I am a devotee of population
models and nonlinear springs and pendulums, so there may be a few of them, but mainly
used to explain an analytic or geometric concept. The reader may have to fill in some
calculations, but there is no list of suggested problems, except as they come up in the
discussion. Hopefully, after reading a section, the reader will be able to come up with
his or her own illuminating problems or lecture points. If that occurs then my goal has
been met.

A brief description of each chapter follows, giving the highlights of each. The order is a standard one and fits most textbooks.

Chapter 1: Solutions

The notion of a solution is developed via familiar notions of a solution of a polynomial
equation, then of an implicit equation, from which the leap to a solution of an ordinary
differential equation is a natural one. This is followed by a brief introduction to the
existence and uniqueness theorem, and the emphasis on its local nature is supported by a
brief discussion of continuation—rarely mentioned in introductory texts. The discussion
is greatly expanded in Chapter 2.

Chapter 2: First Order Equations

The two key ordinary differential equations of first order are the separable equation and the linear equation. The solution technique for the first is developed very generally,
and leads naturally into an introductory discussion of stability of equilibria. The linear
equation is discussed as a prelude to the higher dimensional linear system, with the
special technique of integrating factors taking a back seat to the more general variation
of parameters method. Some other topics mentioned are discontinuous inhomogeneous or
forcing terms, and singular perturbations.

The Riccati equation, the author's favorite nonlinear equation, is discussed next because
it exemplifies many of the features common to nonlinear equations, and leads into a very
elegant discussion of the existence of periodic solutions. This is a topic rarely discussed
in elementary books, yet it gives nice insights into the subject of periodic solutions of
linear and nonlinear equations. There is an expanded discussion in a later section, which
includes a lovely fixed-point existence argument.

The chapter concludes with a brief analysis of the question of continuous dependence
on initial conditions, often confused with sensitive dependence when discussing chaotic
behavior. Finally, we return to the subject of continuation, and the final example gives a nice picture of “how far can we go?”

Chapter 3: Insight Not Numbers

The chapter is not intended as an introduction to numerical methods, but to provide a
conceptual platform for someone faced with the often formidable chapters on numerical
analysis found in some textbooks. There must be an understanding of the idea that
numerical schemes basically chase derivatives, hence the Euler and Improved Euler
methods are excellent examples. Next, the notion of the error of a numerical approximation
must be understood, and this leads nicely into a discussion of what is behind a scheme
that controls local error in order to reduce global error. The popular RKF45 scheme is
discussed as an example.

Chapter 4: Second Order Equations

In a standard one-semester course the time-management question moves center stage when one arrives at the analysis of higher order linear equations and first order linear systems.
One is faced with characteristic polynomials, matrices, eigenvalues, eigenvectors, etc.,
and the presentation can become a minicourse in linear algebra, which takes time away
from the beauty and intent of the subject. For the second order equation the author suggests two approaches:

a) Deal directly with the general time-dependent second order equation, developing fundamental pairs of solutions and linear independence via the Wronskian.

b) Develop the theory of the two-dimensional time-dependent linear system, again discussing fundamental pairs of solutions whose linear independence is verified with the Wronskian. Then consider the special case where the linear system represents a second order equation.

The emphasis in both approaches is on fundamental pairs of solutions, and linear
independence via the Wronskian test, and not spending a lot of time on linear algebra
and/or linear independence per se.

The discussion continues with a now easily obtained analysis of the constant coefficient
second order equation and damping. This is followed by the nonhomogeneous equation
and the variation of parameters method, pursuing the approach used for the first order linear equation and using the linear system theory previously developed. The advantage is
that one can develop the usual variation of parameters formula, without appealing to the
“mysterious” condition needed to solve the resulting set of equations.

The section Cosines and Convolutions is a discussion of the inhomogeneous constant
coefficient equation where the forcing term is periodic. This leads to a general discussion
of the onset of resonance, and for the forced oscillator equation we develop a nice
convolution representation of the solution which determines whether resonance will occur
for any periodic forcing term. The representation can be developed directly, or from the
variation of parameters formula with more effort; Laplace transform theory is not needed.

The final section of the chapter gives some of the author's views on various topics usually covered under the second order linear equation umbrella. Infinite series solutions
is another point in the introductory course where one must make decisions on how deeply
to proceed. The author has some advice for those who wish to move on but want to
develop some understanding. There is no mention of the Laplace transform—any kind
of a proper treatment is far beyond the intent of this book, even with the nice computer
packages which have eliminated the dreary partial fraction expansions.

Chapter 5: Linear and Nonlinear Systems

Given the material in the previous chapter it is now an easy task to discuss the constant
coefficient linear system, for both the homogeneous and inhomogeneous case (the variation
of parameters formula). But for the two-dimensional case, the important topic is the notion of the phase plane, and this is thoroughly analyzed.

However, the character of almost all the possible configurations around an equilibrium point is more easily analyzed using the two-dimensional representation of the second order
scalar equation. The trajectories can be drawn directly and correspond to those in the
more general case via possible rotations and dilations. This approach virtually eliminates
all discussion of eigenvectors and eigenvalues, which is in keeping with the spirit of the
book.

Next is given a general rationale, depending on the per capita growth rate, for the construction of competition and predator-prey models of population growth. This makes
it easier to explain particular models used in the study of populations, as well as to
develop one’s own. In Chapter 1 the constant rate harvesting of a population modeled by
the logistic equation was discussed, and a nice geometric argument led to the notion of a
critical harvest level, beyond which the population perishes. A similar argument occurs for a two-population model in which one of the populations is being harvested—this is rarely discussed in elementary books.

There follows a section on conservative systems, which have the nice feature that one
can analyze their stability using simple graphing techniques without having to do an
eigenvalue analysis, and also introduce conservation of energy. The book concludes with
a proof, using Gronwall's Lemma, of the Perron/Poincaré result for quasilinear systems,
which is the foundation of much of the analysis of stability of equilibria. It is a cornerstone
of the study of nonlinear systems, and seems a fitting way to end the book.

Conclusion

I hope you enjoy the book and that it gives you some conceptual insights which will assist
your teaching and learning the subject of ordinary differential equations. Perhaps the
best summary of the philosophy you should adopt in your reading was given by Henri
Poincaré:

In the past an equation was only considered solved when one had expressed the
solution with the aid of a finite number of known functions; but this is hardly
possible one time in a hundred. What we should always try to do, is to solve
the qualitative problem, that is to find the general form of the curve representing
the unknown function.

Henri Poincaré
King Oscar's Prize, 1889

Contents

Preface — Read This First!

1 Solutions
1 Polynomials
2 More General Equations
3 Implicit Equations
4 Ordinary Differential Equations
5 Existence and Uniqueness—Preliminaries
6 Continuation

2 First Order Equations
1 The Separable Equation—Expanded
2 Equilibria—A First Look at Stability
3 Multiple Equilibria
4 The Linear Equation
5 The Riccati Equation
6 Comparison Sometimes Helps
7 Periodic Solutions
8 Differentiation in Search of a Solution
9 Dependence on Initial Conditions
10 Continuing Continuation

3 Insight Not Numbers
1 Introduction
2 Two Simple Schemes
3 Key Stuff
4 RKF Methods
5 A Few Examples

4 Second Order Equations
1 What Is The Initial Value Problem?
2 The Linear Equation
3 Approach 1—Dealing Directly
4 Approach 2—The Linear System
5 Comments on the Two Approaches
6 Constant Coefficient Equations—The Homogeneous Case
7 What To Do With Solutions
8 The Nonhomogeneous Equation or the Variation of Parameters Formula Untangled
9 Cosines and Convolutions
10 Second Thoughts

5 Linear and Nonlinear Systems
1 Constant Coefficient Linear Systems
2 The Phase Plane
3 The Linear System and the Phase Plane
4 Competition and Predator-Prey Systems
5 Harvesting
6 A Conservative Detour
7 Stability and Gronwalling

References

Index

1 Solutions

To begin to understand the subject of ordinary differential equations (referred to as ODEs
in the interest of brevity) we first need to understand and answer the question “What is
a solution?”. Perhaps the easiest way is to start the hierarchy of solutions way back in
our algebra days and move towards our desired goal.

1 Polynomials

Consider a general polynomial of degree $n$, $P(x) = a_n x^n + \cdots + a_1 x + a_0$ with $a_n \neq 0$, and suppose we wish to find a solution of the equation $P(x) = 0$. In this case a solution is a number $x_0$ satisfying $P(x_0) = 0$. The first question to ask is that of EXISTENCE—does a solution exist? If the coefficients $a_k$ are real the answer is MAYBE if we want real solutions—contrast $P(x) = x^2 - 1$ and $P(x) = x^2 + 1$.

But if we extend the domain of the coefficients and solutions to be the complex numbers,
then the answer is YES, by the Fundamental Theorem of Algebra.

The theorem tells us more—it says that there will be $n$ solutions, not necessarily all
distinct, so the question of UNIQUENESS, except in the case of n = 1 when there is
exactly one solution of $P(x) = 0$, is essentially a moot one. The question CAN WE FIND THEM? would suggest algorithms for finding solutions. They exist for $n = 2$ (the quadratic formula), $n = 3$ and $n = 4$, but beyond that the best we can usually do is numerically approximate the solutions, i.e., “solve” approximately.
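As a quick illustration (a sketch in Python, not part of the book; the quintic $x^5 - x - 1$ and the use of NumPy are my choices), one can “solve” approximately even though no solution formula exists for $n \ge 5$:

```python
import numpy as np

# p(x) = x^5 - x - 1: no general solution formula exists for degree >= 5,
# so we approximate the roots numerically (np.roots uses the companion matrix).
coeffs = [1, 0, 0, 0, -1, -1]                  # coefficients, highest degree first
roots = np.roots(coeffs)

print(len(roots))                              # 5 roots, as the theorem promises
print(roots[np.abs(roots.imag) < 1e-9].real)   # the single real root, ~1.1673
```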
2 More General Equations

For equations $F(x) = 0$ where $F: \mathbb{R} \to \mathbb{R}$ or $F: \mathbb{R}^n \to \mathbb{R}$ is not a polynomial, there are no general results regarding existence or uniqueness, except some general fixed-point results (Brouwer's Theorem), and for $n = 1$ something simple like:

If $F$ is continuous and $F(x_0) > 0$, $F(x_1) < 0$, $x_0 < x_1$, then $F(x) = 0$ has at least one solution in the interval $x_0 < x < x_1$.

Equations may have many solutions, e.g., $F(x) = \sin x - x/60$, or none, e.g., $F(x) = e^x - x/5$, and solution techniques involve some approximation scheme like Newton's
Method.

Remark. An exercise to show the value of some pen-and-paper calculation before calling in the number crunching is to find the next-to-last zero of the first function, where the zeros are numbered consecutively starting with $x_1 = 0$.
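The remark invites a little experiment; here is one possible sketch (Python with SciPy; the bracketing argument and the printed estimates are mine, not the book's). Pencil work locates the last two zeros of $\sin x - x/60$ straddling the final peak of $\sin x$ below $x = 60$, and a root finder refines them:

```python
import numpy as np
from scipy.optimize import brentq

F = lambda x: np.sin(x) - x / 60.0

# Zeros require |x| <= 60; near x = 60 the line x/60 is close to 1, so the last
# two zeros straddle the final peak of sin x below 60, at pi/2 + 18*pi ~ 58.1.
peak = np.pi / 2 + 18 * np.pi
next_to_last = brentq(F, 18 * np.pi, peak)   # ascending crossing, ~57.9
last = brentq(F, peak, 60.0)                 # descending crossing, ~58.4
print(next_to_last, last)
```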
3 Implicit Equations

Now consider the equation $F(x, y) = 0$, where for simplicity we can assume that $x, y \in \mathbb{R}$, though the case $x \in \mathbb{R}$, $y \in \mathbb{R}^n$ is a simple generalization. The question is “Can we solve for $y$?”; that is, can we find a solution $y(x)$, a function satisfying $F(x, y(x)) = 0$. In some cases the solution can be

unique: $x^2 - \ln y = 0$; $y = e^{x^2}$;

not unique: $-x^2 + y^2 - 4 = 0$; $y = \sqrt{4 + x^2}$, $y = -\sqrt{4 + x^2}$;

defined only on a finite interval: $x^2 + y^2 - 4 = 0$; $y = \pm\sqrt{4 - x^2}$, only defined for $|x| \le 2$;

unbounded: $x^2 + \frac{1}{y^2} - 9 = 0$; $y = \pm 1/\sqrt{9 - x^2}$, only defined for $|x| < 3$;

not explicitly solvable: $F(x, y) = x^2 + 4xy^2 + \sin(\pi y) - 1 = 0$.

What is needed is some kind of an Existence and Uniqueness Theorem, and it is
provided by the Implicit Function Theorem:

Given $F(x, y) = 0$ and initial conditions $y(x_0) = y_0$ satisfying $F(x_0, y_0) = 0$, suppose that $F(x, y)$ is continuously differentiable in a neighborhood of $(x_0, y_0)$ and $\partial F/\partial y(x_0, y_0) \neq 0$. Then there is a unique solution $y(x)$, defined and continuous in some neighborhood $N$ of $(x_0, y_0)$, satisfying the initial conditions, and $F(x, y(x)) = 0$ for $x \in N$.

The theorem is a local existence theorem, inasmuch as the solution may only be defined
near $(x_0, y_0)$.

Example. $F(x, y) = x^2 + y^2 - 4 = 0$, and $\partial F/\partial y = 2y \neq 0$ for $y \neq 0$. Let $(x_0, y_0) = (1.8, \sqrt{0.76})$; then the solution $y(x) = \sqrt{4 - x^2}$ is only defined for $-3.8 \le x - 1.8 \le 0.2$, $0 \le y \le 2$.

Furthermore, the solution given by the theorem is unique, meaning that there cannot exist two continuous functions which are solutions and whose graphs intersect at $(x_0, y_0)$.

Can we find solutions? Only numerically is the usual answer, using a combination of
numerical and implicit plotting techniques. For the example above $F(x, y) = x^2 + 4xy^2 + \sin \pi y - 1 = 0$, $F(1, 0) = 0$, and $\partial F/\partial y(1, 0) = \pi$, and a plot of the solution (see diagram) shows that the solution $y(x)$ satisfying $y(1) = 0$ is only defined for approximately $0 < x < 1.2$.
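A sketch of how “only numerically” might look in practice (Python with SciPy; the marching scheme, bracket width, and step size are my choices): march along $x$, solving $F(x, y) = 0$ for $y$ near the previous point, and watch the branch through $(1, 0)$ disappear near $x = 1.2$:

```python
import numpy as np
from scipy.optimize import brentq

def F(x, y):
    return x**2 + 4 * x * y**2 + np.sin(np.pi * y) - 1.0

y = 0.0                                  # start on the known solution (1, 0)
for x in np.arange(1.0, 1.25, 0.05):
    try:
        # Solve F(x, y) = 0 for y in a bracket around the previous value.
        y = brentq(lambda yy: F(x, yy), y - 0.4, y + 0.4)
        print(f"x = {x:.2f}, y = {y:+.4f}")
    except ValueError:                   # no sign change: the branch has ended
        print(f"x = {x:.2f}: no nearby solution; the branch has ended")
        break
```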
4 Ordinary Differential Equations

We generalize from the previous case $F(x, y) = 0$ and consider an equation of the form $F\left(x, y, \frac{dy}{dx}\right) = 0$, or $F\left(t, x, \frac{dx}{dt}\right) = 0$, depending on the choice of $x$ (space related) or $t$ (time related) as the independent variable. In the previous case the solution was a continuous function—now a solution is a differentiable function $y(x)$ or $x(t)$ satisfying $F(x, y(x), y'(x)) = 0$ or $F(t, x(t), \dot{x}(t)) = 0$.

The equations above are first order ordinary differential equations. They are

first order—only a first derivative of the solution appears;

ordinary—only the ordinary derivative $y'(x)$ or $\dot{x}(t)$ appears, not any partial derivatives. There is only one independent variable.

Example. $F\left(t, x, y, \frac{\partial y}{\partial x}, \frac{\partial y}{\partial t}\right) = 0$ would be a first order partial differential equation with solution $y(x, t)$, dependent on two variables.

The definition above can be extended to equations or systems like

a) $F\left(x, y, \frac{dy}{dx}, \frac{d^2 y}{dx^2}\right) = 0$, a second order ODE with solution a twice differentiable function $y(x)$;

b) $F_1\left(t, x, y, \frac{dx}{dt}\right) = 0$, $F_2\left(t, x, y, \frac{dy}{dt}\right) = 0$, a two-dimensional system of first order ODEs, whose solution is a pair of scalar differentiable functions $x(t)$, $y(t)$.

These generalizations can be further extended, but understanding is best acquired using the simplest case.
The ODE in its implicit form $F(t, x, \dot{x}) = 0$ is not very useful nor does it occur frequently in the real world, since we usually describe a physical process in the form

(rate of change) = (function of independent and dependent variables, parameters).

Hence, we assume we can solve $F(t, x, \dot{x}) = 0$ for $\dot{x}$ and write the first order ODE as

(*)  $\frac{dx}{dt} = f(t, x)$

(we have chosen $t$, $x$ as the variables—use $x$, $y$ if you prefer). Some examples and their solutions follow; you may wish to substitute the solution $x(t)$ to obtain an equality $\dot{x}(t) = f(t, x(t))$. Do not be very concerned about how the solutions were obtained—that comes later. The present object is to understand the notion of a solution.
ODE                                    Solution
$\dot{x} = 2x$                         $x(t) = 3e^{2t}$, or $x(t) = Ce^{2t}$, $C$ constant
$\dot{x} = 1 + x^2$                    $x(t) = \tan t$, or $x(t) = \tan(t - C)$, $C$ constant
$\dot{x} = 3t^2 + 4$                   $x(t) = t^3 + 4t + C$, $C$ any constant
$\dot{x} = -x + 4t$                    $x(t) = Ce^{-t} + 4t - 4$, $C$ any constant
$\dot{x} = rx(1 - x/K)$, $r, K > 0$    $x(t) \equiv K$, or $x(t) \equiv 0$, or $x(t) = \dfrac{K x_0}{x_0 + (K - x_0)e^{-rt}}$, $x(0) = x_0 \neq 0$
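To make “substitute the solution” concrete, here is a small sketch (SymPy; my choice of tool, not the book's) checking the fourth row of the table by substitution:

```python
import sympy as sp

t, C = sp.symbols('t C')
x = sp.Function('x')

# Check that x(t) = C e^{-t} + 4t - 4 satisfies dx/dt = -x + 4t identically.
ode = sp.Eq(x(t).diff(t), -x(t) + 4 * t)
candidate = sp.Eq(x(t), C * sp.exp(-t) + 4 * t - 4)
print(sp.checkodesol(ode, candidate))   # (True, 0): the identity holds
```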
Suppose $x(t)$ is a solution of (*); then we can write $\dot{x}(t) = f(t, x(t))$, and if we integrate with respect to $t$ this implies

$x(t) = \int f(s, x(s))\,ds + C = F(t, C),$

where $F(t, C)$ represents the indefinite integral, and $C$ is an appropriate constant of integration. So a general solution will be of the form $x(t, C)$ and we obtain the notion of a family of solutions parameterized by $C$.

But we can go further; suppose we are given the initial condition $x(t_0) = x_0$. Then we want the constant $C$ to be chosen accordingly, and can write the formal expression for the solution

$x(t) = x_0 + \int_{t_0}^{t} f(s, x(s))\,ds.$

It is easily verified that $x(t_0) = x_0$ and $\dot{x}(t) = f(t, x(t))$; the expression is a Volterra
integral equation representation of the solution. It is key to the proof of the existence and uniqueness theorem, but is not very useful since it gives no clue of what $x(t)$ is.

Example. $\dot{x} = t^2 + x^2$, $x(0) = 1$; then $x(t)$ satisfies

$x(t) = 1 + \int_0^t \left[s^2 + x^2(s)\right] ds,$

which is not a terribly enlightening fact.
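Although the Volterra form gives no closed expression, it does suggest an algorithm—successive substitution (Picard iteration), which is also how the existence proof proceeds. A sketch (SymPy; the three-iteration cutoff is arbitrary):

```python
import sympy as sp

t, s = sp.symbols('t s')

# Picard iteration for x' = t^2 + x^2, x(0) = 1: feed each iterate back into
# x(t) = 1 + integral_0^t [s^2 + x(s)^2] ds and watch the approximations grow.
x = sp.Integer(1)
for _ in range(3):
    x = 1 + sp.integrate(s**2 + x.subs(t, s)**2, (s, 0, t))
    print(sp.expand(x))
```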
5 Existence and Uniqueness—Preliminaries

First of all, if the world's collection of ODEs were represented by this page, then this dot • would be the ones that can be explicitly solved, meaning we can find an explicit
representation of a solution $x(t, C)$. This is why we need numerical and graphical
techniques, and mathematical analysis to approximate solutions or describe their properties
or behavior. But to do this we need to have some assurance that solutions exist.

The statement of existence depends on initial values (initial conditions), so we formulate the initial value problem:

$\dot{x} = f(t, x)$, $x(t_0) = x_0$.

Analogous to the implicit function theorem, we want conditions on $f$ assuring us that a solution exists in some region around the point $(t_0, x_0)$. We want to know that there exists at least one differentiable function $x(t)$, defined near $t_0$, satisfying $\dot{x}(t) = f(t, x(t))$, and passing through the point $(t_0, x_0)$. For simplicity suppose that $B$ is a rectangle with center $(t_0, x_0)$,

$B = \{(t, x) : |t - t_0| \le a,\ |x - x_0| \le b\}.$

Then existence of a solution of the IVP is given by the theorem:

Suppose that $f(t, x)$ is continuous in $B$. Then there exists a solution $x(t)$ of the
IVP defined for $|t - t_0| \le r \le a$, with $r > 0$.

Pretty simple—but note the theorem is local in nature, since the solution is guaranteed to exist only for $|t - t_0| \le r$, and $r$ might satisfy $r \ll a$. The rectangle $B$ might be very large, yet $r$ might turn out to be much smaller than its lateral dimension. This leads to the continuation problem, which we will briefly discuss after we discuss uniqueness, which is given by the statement:

Suppose in addition that $\partial f(t, x)/\partial x$ is also continuous in $B$. Then the solution of the IVP is unique.

This means there is only one solution passing through $(t_0, x_0)$, but just remember uniqueness means “solutions don't cross!” so we can't have a picture like this:

[diagram: two distinct curves $x_1(t)$, $x_2(t)$ emanating from the point $(t_0, x_0)$]

where $x_1(t)$, $x_2(t)$ are both solutions of the IVP. From a physical, real world sense such
a situation would be extremely unlikely, and would be a nightmare numerically, since
most numerical schemes try to approximate the derivatives of solutions, and use them
to venture forth from $(t_0, x_0)$. But more importantly, the statement that solutions don't cross means that it can never happen. For if two solutions $x_1(t)$, $x_2(t)$ of $\dot{x} = f(t, x)$ intersected at any point $(t_a, x_a)$, hence $x_1(t_a) = x_2(t_a) = x_a$, then they would both be solutions of the IVP with initial conditions $x(t_a) = x_a$, and this can't occur. This last observation has important consequences in the study of the qualitative behavior of solutions.

Remark. A slightly more general condition that guarantees uniqueness is that $f(t, x)$ satisfy a Lipschitz condition in $B$:

$|f(t, x) - f(t, y)| \le K|x - y|$, $(t, x), (t, y)$ in $B$,

for some $K > 0$. But the condition of a continuous partial derivative $\partial f/\partial x$ in $B$ is much easier to verify.

Example: Since $||x| - |y|| \le |x - y|$, then $f(t, x) = |x|$ satisfies a Lipschitz condition with $K = 1$ in any $B$ of the form $\{(t, x) : |t - t_0| \le a,\ |x| \le b\}$, but $\partial f/\partial x$ is discontinuous at $x = 0$. Nevertheless the IVP, $\dot{x} = |x|$, $x(t_0) = x_0$, has unique solutions for any initial value; if $x_0 = 0$ the solution is $x(t) \equiv 0$.
6 Continuation

This is not a subject easily discussed—the analysis is delicate—but the intuitive result carries one a long way. Go back to the existence and uniqueness theorem for the IVP: $\dot{x} = f(t, x)$, $x(t_0) = x_0$, and suppose we have found an $r$ and a unique solution $x(t)$ defined for $|t - t_0| \le r$.

Since the point $(t_0 + r, x(t_0 + r)) = (t_0 + r, x_1)$ is in $B$, then $f(t, x)$ and $\partial f(t, x)/\partial x$ are continuous in some box $B_1$, possibly contained in $B$, or possibly extending beyond it, with center $(t_0 + r, x_1)$. Now define a new IVP

$\dot{x} = f(t, x)$, $x(t_0 + r) = x_1$,

and the existence and uniqueness theorem guarantees a unique solution defined for $|t - (t_0 + r)| \le r_1$, $r_1 > 0$, which will agree with $x(t)$ on their overlapping interval of existence. We have continued or extended the original solution $x(t)$ defined on the
interval $t_0 - r \le t \le t_0 + r$ to the interval $t_0 - r \le t \le t_0 + r + r_1$.

It seems plausible that we can continue this game, which will eventually stop and the solution will become unbounded or not be defined, or we can continue forever and the solution will be defined on an infinite time interval, $t < \alpha$, $t > \beta$, or $-\infty < t < \infty$. Furthermore, it seems equally plausible that for the IVP, the solution $x(t)$ satisfying $x(t_0) = x_0$ will possess a maximum interval of existence, $t_0 + \alpha < t < t_0 + \beta$, $\alpha, \beta$ finite or infinite, and that interval will depend on $t_0$ and $x_0$.

At this point there are several other topics and refinements regarding existence and
uniqueness that should be discussed. I will defer them until the end of the next chapter—some readers may wish to go there directly, but some may want to wrap their hands around some honest-to-gosh solutions.

2 First Order Equations

The author has long been convinced that a study in depth of the first order, scalar ODE
is the key to understanding many of the qualitative properties of higher order equations,
or systems of equations. But to spend a lot of time on arcane or rarely encountered
equations (exact, Clairaut's, Bernoulli), or learning immediately forgotten transformations which convert some ugly ODE into a solvable, less ugly one, will not likely develop that understanding. Borrow a copy of Kamke's encyclopedic book, Differentialgleichungen: Lösungsmethoden und Lösungen, if you want to see a comprehensive analysis of that little dot mentioned previously.

The two key equations to be studied are

$\frac{dx}{dt} = f(t)\,g(x)$   The Separable Equation
$\frac{dx}{dt} = a(t)x + b(t)$   The Linear Equation

and one should continuously remind oneself that a course in ordinary differential equations is not a post-sophomore course in techniques of integration. Following that philosophy, or the one mentioned in the previous paragraph, or slavishly entering the ODE into the DSolve routine in the computer, is what gives the subject the reputation of being a “bag of tricks.”

1 The Separable Equation—Expanded
The ODE $\frac{dx}{dt} = f(t)\,g(x)$ can be broken down into three separate cases:

A) $g(x) \equiv 1$,  B) $f(t) \equiv 1$,  C) neither A nor B.

For pedagogical reasons this makes good sense, since the level of difficulty goes up at each stage.

A. $\frac{dx}{dt} = f(t)$

Assume that $f(t)$ is continuous on some interval $a < t < b$; then the Existence and
Uniqueness Theorem (hereafter referred to as E and U) gives us that B can be any
rectangle contained in the strip $\{(t, x) : a < t < b,\ -\infty < x < \infty\}$ and a unique solution of the IVP

$\frac{dx}{dt} = f(t)$, $x(t_0) = x_0$, $a < t_0 < b$,

exists. Let

$x(t) = x_0 + F(t) = x_0 + \int_{t_0}^{t} f(s)\,ds$

and verify that $x(t)$ is the desired solution, but note that we may not be able to explicitly evaluate the integral.

Examples.

a) $\frac{dx}{dt} = t^2 + \sin t$, $x(1) = 1$. Then $-\infty < t < \infty$ and $x(t) = 1 + \int_1^t (s^2 + \sin s)\,ds = \frac{t^3}{3} - \cos t + \left(\frac{2}{3} + \cos 1\right)$.

b) $\frac{dx}{dt} = t\sqrt{1 - t^2}$, and E and U are satisfied only for $|t| < 1$;

$x(t) = x_0 + \frac{1}{3}(1 - t_0^2)^{3/2} - \frac{1}{3}(1 - t^2)^{3/2}$

is the solution satisfying $x(t_0) = x_0$, $|t_0| < 1$.

c) $\frac{dx}{dt} = \sin t^2$, $x(0) = 4$. Then $-\infty < t < \infty$ and $x(t) = 4 + \int_0^t \sin s^2\,ds$, but the
integral can only be evaluated numerically (e.g., Simpson's Rule). To approximate, for instance, $x(1)$ you must evaluate

$x(1) = 4 + \int_0^1 \sin s^2\,ds \approx 4.31027.$
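The value 4.31027 is easy to confirm; a sketch (SciPy; both the adaptive rule and the grid size are my choices, not the book's):

```python
import numpy as np
from scipy.integrate import quad, simpson

# x(1) = 4 + integral_0^1 sin(s^2) ds, evaluated two ways.
val, _ = quad(lambda s: np.sin(s**2), 0.0, 1.0)   # adaptive quadrature
print(4 + val)                                     # 4.310268...

s = np.linspace(0.0, 1.0, 101)
print(4 + simpson(np.sin(s**2), x=s))              # Simpson's Rule on a fixed grid
```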
Going back to the expression for $x(t)$ and substituting $t_1$ for $t_0$ and $x_1$ for $x_0$, we see that the solution satisfying $x(t_1) = x_1$ can be given by

$x(t) = \left(x_1 - \int_{t_0}^{t_1} f(s)\,ds\right) + \int_{t_0}^{t} f(s)\,ds = (x_1 - F(t_1)) + F(t),$

so every solution is a translate of $F(t)$, or any other antiderivative, in the $x$-direction.

B. $\frac{dx}{dt} = g(x)$
Assume that $g(x)$ is continuously differentiable on some interval $c < x < d$, so $g(x)$ and $\frac{dg}{dx} = g'(x)$ are continuous there. Then $B$ is any rectangle in the strip $\{(t, x) : -\infty < t < \infty,\ c < x < d\}$ and a unique solution of the IVP

$\frac{dx}{dt} = g(x)$, $x(t_0) = x_0$, $c < x_0 < d$,

exists.

An important case arises here—suppose that $g(x_0) = 0$. Then the constant solution $x(t) \equiv x_0$ is the unique solution of the IVP. Such a solution is an equilibrium or singular solution, and we should look for them all before embarking on the solution procedure. A further analysis of these solutions will be given in the next section.

Suppose $g(x_0) \neq 0$; then consider the expression

$G(x) = \int_{x_0}^{x} \frac{dr}{g(r)} = t - t_0.$
If $x = x_0$ then $G(x_0) = 0 = t - t_0$, so $t = t_0$. Furthermore, by the chain rule we obtain, after differentiating each side with respect to $t$:

$\frac{d}{dt}G(x) = G'(x)\frac{dx}{dt} = \frac{1}{g(x)}\frac{dx}{dt} = \frac{d}{dt}(t - t_0) = 1.$

Hence the expression $G(x) = t - t_0$ gives an implicit expression for the solution $x(t)$ of the IVP. In theory, since $G'(x) = 1/g(x) \neq 0$, then $G(x)$ is invertible and therefore $x(t) = G^{-1}(t - t_0)$.

The last expression is an elegant one, and since

$\int_{x_1}^{x} \frac{dr}{g(r)} = \int_{x_1}^{x_0} \frac{dr}{g(r)} + \int_{x_0}^{x} \frac{dr}{g(r)} = t - t_1,$

one sees that the solution for any initial value is given by $x(t) = G^{-1}(t + C)$ for an
appropriate value of $C$. Every solution is a translate of $G^{-1}(t)$ in the $t$-direction.

There is a twofold problem, however. It may not be possible to evaluate explicitly the integral $\int dr/g(r)$, and even if it is, one may not be able to invert $G(x)$ to get an explicit expression for $x(t)$. One can only resort to numerical or analytic (e.g., series) techniques to obtain approximations of the solution.

Important Remark. Having now satisfied the demands of rigorous analysis, one can now proceed more cavalierly in an effort to make computation easier.

Case A: $\frac{dx}{dt} = f(t)$. Write $dx = f(t)\,dt$, then integrate to get $x(t) = \int f(t)\,dt + C$ and choose $C$ so that $x(t_0) = x_0$.

Case B: $\frac{dx}{dt} = g(x)$. Write $dx/g(x) = dt$, then integrate both sides to get $G(x) = t + C$ and choose $C$ satisfying $G(x_0) = t_0 + C$. Invert if possible to get $x(t) = G^{-1}(t + G(x_0) - t_0)$; the last two steps can be interchanged if $G(x)$ can be inverted.

Examples.

a) $\frac{dx}{dt} = kx$; then $g(x) = kx$, $g'(x) = k$, and $-\infty < x < \infty$. Since $g(0) = 0$, $x(t) \equiv 0$
is an equilibrium, so we must consider $x < 0$ or $x > 0$. Proceeding as above,

$\frac{dx}{kx} = dt$ or $\frac{1}{k}\ln x = t + C,$

then

$x(t) = e^{kt + kC} = e^{kt}e^{kC} = Ce^{kt}.$

Since $C$ is an arbitrary constant we can take liberties with it, so $e^{kC}$ becomes $C$, and $x(t) > 0$ implies $C > 0$, $x(t) < 0$ implies $C < 0$. We may have offended purists who insist that $\int \frac{dx}{x} = \ln|x|$, but carrying around the absolute value sign is a nuisance, and the little $C$ stratagem forgives all. Then $x(t_0) = x_0$ gives $C = x_0 e^{-kt_0}$ (so $C$ takes the sign of $x_0$) and $x(t) = x_0 e^{k(t - t_0)}$, which is

exponential decay if $k < 0$, and $x(t) \to 0$, the equilibrium, as $t \to \infty$;

exponential growth if $k > 0$, and $x(t)$ becomes unbounded as $t \to \infty$.

b) $\frac{dx}{dt} = 1 + x^2$; $g(x) = 1 + x^2$, $g'(x) = 2x$, and $-\infty < x < \infty$. Then

$\int \frac{dx}{1 + x^2} = \tan^{-1} x = t + C.$

(Note we have played games with constants again, since to be calculus-proper we
should write $\int \frac{dx}{1 + x^2} = \tan^{-1} x + C_1 = t + C_2$. But let $C_2 - C_1$ become $C$, thus easing bookkeeping.) Then $x(t) = \tan(t + C)$ and if $x(t_0) = x_0$ we have $x(t) = \tan(t + (\tan^{-1} x_0 - t_0))$. The real importance of this equation is that the solution becomes infinite on a finite interval,

$-\frac{\pi}{2} - (\tan^{-1} x_0 - t_0) < t < \frac{\pi}{2} - (\tan^{-1} x_0 - t_0),$

despite the deceiving ubiquity and smoothness of $g(x) = 1 + x^2$. This is a very good example of finite escape time—the solution blows up at finite values of $t$. The interval of existence of a solution is limited in ways not apparent from the ODE itself.
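Finite escape time also shows up numerically; a sketch (SciPy's solve_ivp, my choice of tool) for $\dot{x} = 1 + x^2$, $x(0) = 0$, whose solution $\tan t$ blows up at $t = \pi/2$:

```python
from scipy.integrate import solve_ivp

# x' = 1 + x^2, x(0) = 0 has solution tan t, which escapes to infinity at
# t = pi/2 ~ 1.5708; an adaptive integrator typically stalls just short of
# the escape time as its steps shrink.
sol = solve_ivp(lambda t, x: 1 + x**2, (0.0, 2.0), [0.0], rtol=1e-8)
print(sol.t[-1], sol.y[0, -1])   # final time near pi/2, huge solution value
print(sol.status, sol.message)   # typically status -1: step size underflow
```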
c) $\frac{dx}{dt} = 3x^{2/3}$; this is the time-honored representative of nonuniqueness of solutions of the IVP. Since $g(x) = 3x^{2/3}$, then $g(0) = 0$, so $x(t) \equiv 0$ is a solution; but since $g' = 2x^{-1/3}$ is not continuous at $x = 0$, there may not be uniqueness, which is the case. If $x(t_0) = x_0$ then

$\int_{x_0}^{x} \frac{dr}{3r^{2/3}} = x^{1/3} - x_0^{1/3} = t - t_0,$

so $x(t) = (t - t_0 + x_0^{1/3})^3$, and if $x_0 = 0$ then $x(t) = (t - t_0)^3$. Both it and the solution $x(t) \equiv 0$ pass through the point $(t_0, 0)$. In fact, since $\frac{dx}{dt} = 3(t - t_0)^2$ equals $0$ at $t = t_0$, we have four solutions passing through $(t_0, 0)$: $x(t) \equiv 0$, $x(t) = (t - t_0)^3$,

$x(t) = \begin{cases} (t - t_0)^3, & t \le t_0 \\ 0, & t > t_0 \end{cases}$  and  $x(t) = \begin{cases} 0, & t \le t_0 \\ (t - t_0)^3, & t > t_0 \end{cases}$

The agreement of the derivative at $(t_0, 0)$ of both solutions makes it possible to glue them together.

d) $\frac{dx}{dt} = \frac{1}{x^4 + 4}$ and $\frac{dx}{dt} = x^4 + 4$, $x(t_0) = x_0$. In both cases $-\infty < x < \infty$, but in the first case we obtain the implicit representation

$\frac{x^5}{5} + 4x = t - t_0 + \frac{x_0^5}{5} + 4x_0,$

which can't be solved for $x = x(t)$, the solution. In the second case we get

$\int_{x_0}^{x} \frac{dr}{r^4 + 4} = t - t_0$

and the integral cannot be explicitly evaluated. In both cases, you must call your
friendly number cruncher.

C. $\frac{dx}{dt} = f(t)\,g(x)$

If $f(t)$, $g(x)$ and $\frac{\partial}{\partial x}(f(t)g(x)) = f(t)g'(x)$ are continuous for $a < t < b$ and $c < x < d$, then unique solutions of the IVP are assured. First, find all values $x = k$ where $g$ vanishes; then $x(t) \equiv k$ are singular solutions. Otherwise, if

$F(t) = \int f(t)\,dt$ and $G(x) = \int \frac{dx}{g(x)},$

then an implicit expression for the solution $x(t)$ satisfying the IC $x(t_0) = x_0$ is

$G(x) = F(t) + G(x_0) - F(t_0).$

Since $G'(x) = 1/g(x) \neq 0$, then $G$ is locally invertible and $x(t) = G^{-1}(F(t) + G(x_0) - F(t_0))$. All the joys and headaches of the previous case remain, and of course, we can “separate” variables to write the informal solution approach

$\frac{dx}{g(x)} = f(t)\,dt$, etc.

Examples. Build your own, but here's a nice one showing how the maximum interval
£2—3z4/3sint, —oo<t<oo,—oo<a:<oo.
Then f(t) = sint, g(a:) = —3a:4/3 and g’(a:) = —4a:1/3 are continuous, and
da: m = —3sint dt implies —3:I:'1/3 = 3cost + C'. Letting C become —3C' gives the solution a:(t) = (C — cos t)_3. If C' > 1 then it is
deﬁned for —00 < t < 00, whereas if C' S 1 it will only be deﬁned on a ﬁnite interval.
For instance a:(7r/2) = § implies C = 2, a:(t) = (2 — cost)_3, —00 < t < oo,
—3
a:(7r/2) = 8 implies C = 1/2, a:(t) = — cos t) , which is deﬁned only for 7r / 3 < t < 57r / 3. 2 Equilibria—A First Look at Stability We consider the ODE % = g(a:), where g(a:) and g’ are continuous for :3 in some
interval a < a: < b, and suppose g(a:0) = 0, and 0:0 is an isolated zero of 9. Then 14 Ordinary Diﬂerential Equations—A Brief Eclectic Tour a:(t) E x0 is a constant solution of the IVP d2: E = 9(x)7 $00) = x0, for any to, and is deﬁned for —00 < t < 00. We call such a solution an equilibrium or
singular solution, but perhaps a better terminology is 2:0 is a critical point of the ODE.
The key to the characterization of 2:0 is the value of g’ (270). g’(:1:o) > 0: Then % = g(a:) is negative for a: < 270 and near (to, and is positive for
a: > £0 and near (to, since g(a:) is an increasing function near 270 and vanishes at 2:0.
Consequently, any solution a:(t) with an initial value x(to) = x1, £1 < x0 and x1 near
me, must be a decreasing function so it moves away from 2:0. But if x1 > 270 and 271 near
$0 the solution must be an increasing function and it moves away from me. We have the picture, X A /1 \ xoﬂx The middle picture is one of the phase line where we regard solutions a:(t) as moving
points on the xaxis—it is a useful way to display the behavior of solutions in the
onedimensional case but the third picture is more illustrative. We can call such a critical
point an unstable equilibrium point or source (a term inherited from physics). In physical
terms we can think of the ODE as governing a system which is at rest when a:(t) = x0,
but any small nudge will move the system away from rest. The solutions are repelled
by their proximity to 2:0 which gives rise to another colorful name: x0 is a repeller or repelling point. g’(a:o) < 0: The previous analysis repeated leads to the following set of pictures: X \Xo\
/ x x(t) x(t) 
¥ x _)_._%> X t
x 0 0 We call such a critical point a stable equilibrium point or sink (physics again lends a
nice household name). In physical terms the system is at rest when .r(t) = x0, and under
any small nudge the system will attempt to return to the rest position. Nearby solutions
are attracted to 270 which gives us another name: 270 is an attractor or attracting point. This is our ﬁrst venture into the qualitative theory of ODEs, and the important point
is that the analysis allows us to describe the behavior of solutions near :30 whether we can explicitly solve the ODE or not! 2 First Order Equations 15 Example. % = ﬁn: 75 1. Then 9(0) 2 0, and g’(0) = —1 so 9730 = 0 is
a stable equilibrium point. Solving the equation leads to the implicit expression for
:1: = alt(t): :1: —1nar: = t + C which we can’t solve for x = {C(t). But we have a very good picture of what happens to solutions whose initial values are close to 0. The last example brings up a point worth amplifying. Suppose x0 is a stable equilibrium
point and g’ (:50) < 0. Then given any solution a:(t) with initial value x(t0) = $1, with
x1 > .730 but sufﬁciently close to .730, we have d1;
dt t=to hence a:(t) is decreasing near t = to. But now apply the existence and uniqueness
criteria, and the continuation argument previously given, and we can infer that a:(t)
must continue to decrease, but since it can’t cross the line a: = x0, we conclude that tli’m a:(t) = .730. A rigorous argument justifying the last assertions, which are correct,
00
might take a few paragraphs, but we should follow our geometric hunches—they are rarely wrong (frequently right?).
The last statement leads to the deﬁnition of the asymptotic stability of .730, for which a
homespun characterization (avoiding Es and 6s} at this point is all that is needed. = g(a:1) < 0 (go back to the relevant picture) The equilibrium point x0 is asymptotically stable if once a solution a:(t) is
close to $0 for some t = t1, it stays close for all t > t1, and furthermore 3,1213“) 2 are. To expand on this requires the subtle distinction between stability (once close, always close) and asymptotic stability. This distinction is a much richer one in the study of
higher dimensional systems, and makes little sense for onedimensional ODEs, unless we want to do a little nitpicking and discuss the case g(a:) E 0 when every .730 is a solution,
and so is every $1 as close to me as we want. g’(zo) = 0: This is the ambiguous case, and requires individual analysis of an ODE
having the property. We can have me is stable: e.g., x0 = 0 and g(a:) = —2a:5,
x0 is unstable: e.g., x0 = 0 and g(a:) = .733; the reader is left to verify the conclusions. But a third possibility arises in which x0 is a
semistable equilibrium point. Example. ff 2 :52, so 9(0) = 0, and g’(0) = 0. Solve the equation to obtain a:(t) = —(t + 0)“. Let a:(0) = —e, e > 0 then a:(t) = — (t + 9—1 which approaches
x0 = 0 as t —> 00. But let a:(0) = e, e > 0 then a:(t) = — (t— 9—1 which is an increasing function for t > 0 and blows up when t = A very annoying case, both mathematically and physically! This is a very good time to introduce the notion of linearized stability analysis which
is a formidable name for a simple application of Taylor’s Formula, and will be more
extensively used in the study of higher dimensional nonlinear systems. 16 Ordinary Ditterential Equations—A Briet Eclectic Tour If :00 is an equilibrium or critical point of the ODE dcc/dt = g(:c), then to study its
stability we can consider a nearby solution x(t) and write it as a:(t) = :50 + n(t), where
n(t) is regarded as a small perturbation. Since :00 + n(t) is a solution then $050 + nu» = o + ﬁne) = 9(a) + nu», and if g(:1:) is sufﬁciently smooth we can apply Taylor’s Formula, recalling that g(:c0) = 0.
We get d 2 .
d—Z = 0 + g’ (150)77 + g”(:co)%‘— + (higher order terms)
and the key assumption is that the quadratic and higher order terms, which are very small
if n(t) is small, have little or no effect, and the dominant equation governing the growth
of 77(t) is the linear approximation
d7]
E = g’($o)n or W) = 0 exp [g'($o)t] We see immediately that
g'(:c0) < 0 implies n(t) —v 0 as t —v ()0 hence 1:0 is stable,
g’(:co) > 0 implies 77(t) becomes unbounded as t —» oo hence x0 is unstable. To analyze stability in the semistable case we have to ﬁnd the ﬁrst nonzero value g(") (21:0)
and solve the resulting nonlinear ODE. But for the case g’(:c0) 76 0 the linearized analysis
backs up our previous discussion, and will be crucial when we discuss equilibria of
higher order systems, when dimensionality takes away our nice onedimensional, calculus
approach. 3 Multiple Equilibria For a start consider the equation % = g(a:) where g(:1:1) = g(a:2) = 0 and :51 < 1:2; for instance, g(:c) = m(:c — 1) and :51 = 0, :52 = 1. We will assume that g(:c) and g’(m)
are continuous so E and U of the solution of the WP is assured. Thus, we have the two
solutions 11:1(t) E 51:1, 11:2 (t) E 2:2, and the picture X 2(0 X10) The most important conclusion we can draw'from the picture is Every solution m(t) with initial conditions m(to) = :50, where 231 < 930 < :52, is
deﬁned for —00 < t < oo. 2 First Order Equations 17 This is an important but frequently overlooked fact.1 The reasons for this conclusion are the continuation property and uniqueness of
solutions. The existence and uniqueness result tells us that the solution will be deﬁned
for some interval to — 1' S t S to + 1', 1' > 0, and for that interval the solution cannot
intersect the solutions x1(t) or $2(t) by uniqueness. That means that the end points
x(t0 — 1') and x(to + 1') will be inside the interval 101 < x < 102, and we can apply
the existence and uniqueness theorem and further extend m(t), over and over again. The
solution x(t) is trapped between the bounding solutions m1 and $2 and no matter how it
wiggles it can never escape, but must move inexorably forward and backward. This is a
wonderful result, since determining whether solutions exist for all time and are bounded is a tricky matter: ‘31—: = cosx — $2, and = 0 at x R: :l:0.82413 so every solution with initial
values x(t0) = C, C < 0.82413 is deﬁned for —00 < t < 00 and satisﬁes x(t) < 0.82413. But we can go a little further and discuss stability, in the case of two equilibria, $1 and {112, {111 < {112.
Since g(a:) does not change sign between $1 and 152 it must be either positive: in which case (ii? > 0, so any solutions a:(t) between $1 and 162 must be strictly increasing, but remember “solutions don’t cross.” Consequently tlgglo $(t) = :52. or
negative: the same line of reasoning tell us that any solution between $1 and 152 must satisfy tlim a:(t) = :51.
—>oo X x
x2 2
g(x)>0 \ x1 \ good) x1 Note: that we only need to check the sign of g(a:) at any point between $1 and 152 which
makes the job even easier. In the example above g(0) = 1 > 0 so we conclude any solution :50?) with initial
value a:(t0) = C, C < 0.82413 satisﬁes tl_i>m x(t) = 0.82413.
00 Now what remains is to examine the behavior of solutions with initial values :50 less
than $1 or greater than 152. But since g(a:) cannot change sign for m < $1 or x > £2 we
will get pictures like: 1 At no cost you have discovered that the ODE has bounded solutions! 18 Ordinary Diﬂerential Equations—A Brie! Eclectic Tour AX \\ X /
__l\¢__>_d stable A semimble
xz/
semistable semistable
x1/
t 3. t
0  0 0 (or similarly xI stable and x2
semistable) The semistable case would be troublesome for any physical model; a solution could be
close to an equilibrium value but a small change in the data could make it veer off. A useful and popular model of the ﬁrst picture above is the logistic equation of
population growth: drc _ m ( cc ) r = intrinsic growth rate > 0, Et 1 _ R'— ; K = carrying capacity > 0, with equilibria 0:1 = 0 (unstable), and :22 = K (stable). The model is discussed in
many of the current textbooks, as well as the variation which includes harvesting, ﬁrst
discussed by Fred Brauer and the author in a 1975 paper in the journal Theoretical
Population Biology. That equation is da; :1: .
a — m; (1 — E) — H, H —— harvesting rate > 0, and a little calculus and some analysis shows that as H increases from O, the unstable
equilibrium increases from 0, and the stable equilibrium decreases from K. When H
reaches rK/4 they both coalesce and for H > rK / 4 we have iii—f < 0 for all at so the
population expires in ﬁnite time. For devotees of bifurcation theory the behavior can also
be analyzed from the standpoint of a cessation of stability when the parameter H reaches
the critical value rK / 4: a saddlenode bifurcation. One can now pile on more equilibria and look at equations like d2; E
and put them on the computer and look at direction ﬁelds, a pleasant, somewhat mindless
exercise. Things can get wacky with equations like =(m—x1)(m~m2)~(m~wn) 2 First Order Equations 19 1 . 1 d—x = sin —, with an inﬁnite number of equilibrla urn = i—, n = 1,2,. . .,
dt :1: mr
converging to 0 and alternating in stability, or 0 . . .
% = sin2 0, with an inﬁnite number of semistable equ111br1a axn = 0, in, d0 i27r, . . ., since in the vicinity of any equilibrium 3 > 0. The last example will come up later in the discussion of the phase plane. The case where the righthand side of the differential equation depends on t and :1:
(called the nonautonomous case) is much trickier, and while there are some general results,
the analysis is usually on a case by case basis. The following example is illustrative of
the complexities. drc t(1 — :52)
E l . — = — 0.
xampe dt $(1+t2) $76
We see that 2:1(t) E 1, :52 (t) E —1 are solutions, so by uniqueness all other solutions lie aetween —1 and 1, are below —1 or are above 1. Now we must do some analysis: Case: 0< Iml <1. Then‘i—f:0whent=0and da: _ t>0 da: . t<0
d—t>0 1f{$>0 E<O 1f{$>0
da: . t<0 da: . t>O
dt>0 1f{ar<0 tit<0 1f{ar<0 Ease: [ml > 1. Then % = 0 when t = 0 and d
—$>01f{t>0 ﬂ<01f{t<0 dt a:>1 dt a:>1
dIL‘ . t>0 dar: . t<0
dt>0 1f{ac<—l E<0 1f{x<—1 Since the equation is invariant under the change of variable t —> —t we can conclude
at If x(t0) = $0 > 0, then lim x(t) = 1, and t—tioo if x(t0) = £0 < 0, then lim x(t) = —1. t—> :too )lutions have a maximum or minimum at 3(0) which becomes cusplike as x(0) gets
)ser to zero. (See diagram.) The effect of t on the asymptotic behavior of solutions quite distinct from the autonomous (no tdependence) case when solutions are strictly
)notonic. 20 . Ordinary Ditterential Equations—A Briet Eclectic Tour Admittedly, all but the last analysis could be done on a onedimensional phase line,
but one loses the attractive geometry of solutions moving around in the (hm) plane.
Furthermore, the approach taken is good training for later discussion of the phase plane,
when solutions become trajectories carrying parametric representations of solutions. 4 The Linear Equation
We begin with the homogeneous linear equation
i = a(t)w, a(t) continuous on p < t < q, which is a separable equation whose solution is x(t) = $0 exp U a(s)ds] satisfying the initial conditions note) = $0. If we deﬁne <I>(t) = exp [ ft: n(s)ds] then
we can express the solution as m(t) = w0<I>(t). This is the onedimensional version of
the solution of the ndimensional linear system = A(t)m where w = col(m1, . .. ,mn)
and A(t) = (aw(0) is an n x 77, matrix with continuou~s components. The solution
is £(t) = $09.30 where @(t) is a fundamental matrix satisfying the matrix ODE
<i> = A(t)q>, m0) = I. The fundamental matrix <I>(t) is a solution of the matrix ODE (l) = A(t)<I>. In the case
where A(t) = A we have the series representation of the matrix exponential
2 3
<I>(t) =exp[tA] =I+tA+ %A2 + :7A3+' which converges, and is a nice formal representation, but nobody in their right mind
would compute it this way to solve the matrix ODE.
If we consider the nonhomogeneous linear equation i = a(t)w + b(t), a(t), b(t) continuous on p < t < q, 2 First Order Equations 21 then the solution satisfying a:(to) = x0 is given by M me) = exp [/t:a(s)ds] [x0 + exP [_ We] eons] . The representation is easily obtained using the very important variation of parameters
technique. Let a:(t) : exp a(s)ds] u(t) = <I>(t)u(t), where we hope to ﬁnd u(t). If
the initial conditions are a: to) = :50 then u(t0) = :50 since @(to) = 1. Substitute into the ODE to get at = M + <i>u = a(t)<I>u + b(t),
and since (i) = a(t)<I> we cancel to obtain M = b(t) or it = <I>—1(t)b(t). This is solved immediately 13
u(t) = x0 +/ <I>_1(s)b(s)ds,
to \
and since <I>‘1(s) = exp [— a(r)dr] we get (*).
But note the very important feature of the proof above, namely that <I>(t) could be
instead the fundamental matrix of the n dimensional linear system a': = A(t):c, satisfying @(to) = I, the identity matrix. The solution of the nonhomogeneous linear system
:i: = A(t):c + B (t), where B (t) is an ndimensional continuous vector, is given by (*)n g(t) = <I>(t) [go + f t <I>1(s)§(s)ds] to which is simply the ndimensional version of What is the point of escalating to this level of generality now? It is because almost all
introductory texts write the first order scalar nonhomogeneous linear equation as :i: + a(t):c = b(t), a(t), b(t) continuous on p < t < q, so they can use the traditional ploy of integrating factors. Note that the above expression
is out of line with the universally accepted way to write the ndimensional version as :i: = A(t)a: + B(t).
The integrating factor approach is to multiply the equation by the integrating factor
exp a(s)ds], then exp [ft a(s)ds] e + exp [ft a(s)ds] a(t)a: = exp [/t a(s)ds] b(t). One sagaciously notes that the lefthand side can be expressed as an exact derivative and we get
% (exp a(s)ds] is) = exp 0(3)d5] W) 22 Ordinary Differential EquationsA Brier Eclectic Tour Consequently, exp [ ands] x(t) = x0 + exp [ a(r)dr] b(s)ds, and solving for x(t) gives the expression (*), but with a(t) replaced by —a(t). This last remark is a source of confusion to many neophytes, since they first study the
homogeneous equation a’: = a(t)x, not i: — a(t)x = 0, then advance to a‘: + a(t)a: = b(t).
Consequently, the sign change can cause problems in choosing the “right” integrating
factor. Secondly, the integrating factor approach has no analogue for higher dimensional
systems. Write g = A(t)g + as — A(t)g = 1230?), then you must multiply by i>_l(t) to obtain <I>‘1(t)g' — <I>_1(t)A(t) 252
ll <I>_1(t)§(t). Now you have to ﬁgure out whether the lefthand side equals %(<I>—1(t)x) when all you know is that <I>(t) satisﬁes the matrix ODE <i> = A(t)<I>. Tricky business! You may not be convinced by the passionate argument above to eschew the “Find
the integrating factor!” trick, and that’s okay. Remembering (*) means memorizing a
formula, something held in low repute by some of today’s mathematical educators. We
will see that not using (*)n in developing the variation of parameters formula for higher
order scalar ODE’s makes life harder and introduces some algebraic complications. But
choose your own petard. Now let’s turn to solutions. The formulas are there and if ft a(t)dt has a known
antiderivative then we can ﬁnd an explicit solution of a': = a(t)x. Furthermore, if the
same is true of ft exp [— [3 a(r)dr] b(s)ds then the nonhomogeneous problem can be
solved using Given the initial value x(t0) = 330 we can either incorporate it by using
the deﬁnite integral or just use the indeﬁnite integral and an arbitrary constant to be
evaluated later. The ﬁrst approach is a little more efﬁcient and reduces the chance of
algebraic errors. If either or both of the deﬁnite integrals cannot be evaluated then we
must resort to numerical methods. The problem with many resentations of the solving of the nonhomogeneous equation
is that the integral exp a(r)dr‘] b(s)ds is used as a test of the student’s ability to
use (sometimes obscure) techniques 0 integration in what are obviously rigged problems.
A relatively simple example is at: (tant)x+esmt, 0<t<7r/2, which the reader can work out, and note that if taut were changed to — tan t the problem
is intractable. It pays to keep in mind that the subject being studied is ordinary differential
equations not integral calculus. It is extremely important to emphasize that is a representation of the very important 2 First Order Equations 23 fact, true for all linear‘ systems—the solution $(t) is the sum of t
exp [/ a(s)ds] 2:0 — the general solution of the homogeneous
to equation i = a(t)x, and ppm = exp [ a(s)ds] exp [— mm] b(s)ds — a particular solution of the nonhomogeneous equation. The effort to ﬁnd xp(t) must be an expeditious one, and almost all textbooks fail to
point out that the method of undetermined coeﬁ‘icients (or judicious guessing), extolled
for higher order constant coefficient linear equations, works for ﬁrst order equations as well.
Consider the simple IVP i=2x+3t2—t+4, a:(0)=4. Using the expression for atp(t) above we obtain
t
xp(t) = e2t/ ¢3_25(3s2 — s + 4)ds
0 which will involve at least three integrations, mostly by parts, unless one happens to have a table of integrals or some sophisticated software handy.
But all the integrals will have a factor e‘zt which will be cancelled by the term e and one is left with a polynomial of degree 2. So why not try mp(t) = At2 + Bt + C,
substitute into the ODE and solve for the arbitrary constants? Proceed: 2t
, at},=2At+B=2(At2+Bt+C)+3t2—t+4,
then equating like powers of t gives
0=2A+3, 2A=ZB—1, 3220—4. Solving these we obtain the general expression for 17(t) x(t) = xoezt — gtz — t + g, and a:(0) = 4 gives x0 = 5/2. Simplicity itself! In the above problem, if the polynomial term were replaced by asin qt or bsin qt
the trial solution would be xp(t) = Asin qt + Bcos qt—you must include both. If
instead the term were ace“ the trial solution would be $p(t) = Aek‘ except in the case
where k = 2, when it would be mp(t) = Ate”, since As” is already a solution of the
homogeneous equation. Students (and paper graders) will appreciate these obviations of
tedious integrations. The subject of existence of periodic solutions, when a(t) and b(t) are periodic will be
covered in a later section when the linear equation and more general equations will be
discussed. But an interesting topic is the case where b(t), the forcing or nonhomogeneous
term, is discontinuous. For instance, b(t) could be a pulse which is turned on for a certain 24 Ordinary Diﬂerential Equations—A Brief Eclectic Tour time interval, then turned off. Although the E and U theorem for d: = f (t, 9:) requires
that f be continuous in t, for the linear equation one sees that the integral expression exp i‘ WW] b(s)ds is a continuous function of t, even if b(t) is piecewise continuous. (One of my professors
stated that integration make things nicer, whereas differentiation makes them worse—not
a bad maxim). Consequently, the solution will be continuous, but have a discontinuous derivative.
The last observation takes us into the realms of higher analysis, where solutions are regarded as absolutely continuous functions, but this does not mean we need to ignore
the question since it provides some nice problems. Consider the general case with one discontinuity; more can be considered but this
just involves extra bookkeeping. For simplicity, we let a(t) = a since the expression
‘ exp [ a(s)ds] plays no real role in the discussion. We can also let to = 0, therefore a‘: = am + b(t), 21(0) = 1:0 where bu) _ bl(t), continuous, 0 S t 3 t1
_ b2(t), continuous, t1 < t, and b1(t1) 76 b2(t1+), so b(t) has a jump discontinuity at t = t1.
If we use (*) and follow our noses we get the following expression for the solution: it
a:(t) = 9306‘” + e“‘/ e—“b1(s)ds, 0 S t 3 t1,
0 t1 t
a:(t) = 9306‘” + e“/ e“‘”b1(s)ds + e‘”/ e"‘”b2(s)ds, t1 < t.
0 t1 The integral expression is selfevident and what it really does is give us a method of
solution which is: Solve the IVP a': = az+bl(t), 93(0) : 9:0, 0 _<_ t 3 t1, and compute z(t1) = 2:1. Solve the IVP 9': = (19: + b2(t), z(t1) = ml, t1 < t, then glue the two solutions
together at t = t1. We can use (*) and the relation eate‘“ = raw—3) to write a compact expression for
the solution it
z(t) = moeat +/ ea(t—s)b(s)ds. to This is the elegant convolution integral representation of the solution, which is of greater
interest when we discuss higher order equations; it is unfortunately usually never covered
until one discusses Laplace transform theory. 2. First Order Equations 25 —1,0§tgl . h
1, t>1 Te" Simple example: 11': = 2:1: + b(t), $(0) = 4, b(t) = { t
7
a:(t) = 462‘ +/ 62(“s)(—1)d5 = 562‘ + g, 0 S t < 1,
o 1 t
a:(t) = 46” +/ e2(t‘s)(—1)ds +/ 62(t'3)(1)d3
o 1 1
= Zez" + 62(t_1)— — t > 1. 2 2’
We see that the two solutions agree at t = 1, but d_a:
dt da: 2
— = 2.
dt 76 + t=l+ = 762,
t=l— The second expression for a:(t) can also be obtained by solving :i: = 2m + 1,
:1:( 1) = %62 + Finally, in keeping with this book’s philosophy that the ﬁrst order equation is an
excellent platform to introduce more general topics at an elementary level, the author
would like to discuss brieﬂy the subject of singular perturbations. This will be done
via a simple example which gives a glimpse into this fascinating topic of great interest
in applied mathematics. For a more indepth discussion see the references by Robert
O’Malley. Consider the IV P em=—$+1+t, a:(0)=0 where e > 0 is a small parameter. The solution is a:(t) = (6 — 1)e_t/6 +t+ (1 — e) and we see that it has a discontinuous limit as e —+ 0+ . _ 0, t=0
el—1>I(I)1+$(t)—{1+t, t>0 Furthermore, if one is familiar with the “big 0” notation, then 1 1
d:(t)= ——1 e_‘/‘+1%— for 0<t<0(e),
E 6 so what we have is a thin 0(6) region of transition, called a boundary layer, in which
the solution rises rapidly from (0,0) and then approximates $(t) = 1 + t. 26 Ordinary Differential Equations—A Briet Eclectic Tour The function $(t) = 1+t obtained by setting 6 = 0 in the ODE is called the outer solution.
The phenomenon described is very important in ﬂuid mechanics where it models the ﬂow
of a liquid in a pipe near the walls of the pipe. Another interesting example is the IVP Cit = —(sintcost)z, x(0) = 1, and the points t = mr/2, n = 0, i1, i2, . . . , where a(t) = sintcost = % sin 2t vanishes
are called turning points. As before, 6 > 0 and small, and the solution is x(t) = exp(——(sin2 t)/2e). The solution is negligible except at every other turning point t = 0,:l:7r, i27r, . . . ,
where it has steep spikes of height 1 and width 0(\/E). For instance, z(0) = 1 and = —‘3i’2‘6r‘)texp[—(sin2 t)/2e], and since sin0 % 0 for 0 small we see that if
0 < t < 0(\/E) then £(t) m —0(1/\/E) —~> —00 as e —~> 0+. X Unless one uses special numerical routines the solutions near the spikes will be tricky to
compute. 5 The Riccati Equation Although the reader was admonished to stay away from exotic special ODEs, the Riccati
differential equation (ﬁrst proposed by Count Jacopo Riccati, 1676—1754)2 appears more 2 The spelling of the ﬁrst name was found on a picture the author has of the good Count. Sometime after
1754 the Italian language dropped entirely the J and replaced it with a Gi. 2 First Order Equations 27 frequently in theoretical and applied analysis than the rest of the lot. Besides, it is the author’s favorite equation.
The Riccati equation is a first order equation with a quadratic nonlinear term (the worst kind, some say), and is 1W) = 1009032 + q(t)w + 7"(t), where p, q, and are continuous on some interval a < t < b. It occurs in applications,
some of which are: a) b)
The WKB method in asymptotic analysis: for the second order linear equation
:17 — A2p(t)y = 0, the transformation a:(t) = y(t)/y(t) converts it to the Riccati equation a': + m2 = A2p(t), and y(t) = Cexp [ft a:(s)ds].
The Riccati equation determines the feedback—control law for the linear regulator problem with quadratic cost. In the onedimensional case this is the following: a
system with output ar(t) and input or control u(t) is governed by the linear ODE {1‘} = a(t)$ + b(t)u, $(t0) = $0, to < t < t1. The object is to choose a control u = u(t) so as to minimize the cost functional or
performance measure 1 f1 2 to where wl (t) and w2 (t) are given weighting functions, and k 2 0.
Optimal control theory gives that the solution is realized by the feedback control
law: Clu] = My + [w1(t)$(t)2 + w2(t>u(t>2]dt, um =p*(t>x(t>, W) = ﬁgmtx
where p(t) satisfies the Riccati equation
1') = 2a(t)p+ “02 :02 — w1(t), p(t1)= k
W205) The control u(t) is a feedback control since its present value is determined by the
present value of the solution $(t), via the feedback control law. A simple example
of feedback control is the household thermostat. (A sample problem is given at the
end of the section). For higher dimensional systems the Riccati equation is replaced
by a matrix form of the equation. The logistic equation, with or without harvesting, and with time dependent coefﬁ
cients K(t) w) = 71m (1 — ) — H(t), is a Riccati equation. For instance, one could consider the case r, K constant and
H (t) periodic, corresponding to seasonal harvesting. 28 Ordinary Diﬂerential Equations—A Brief Eclectic Tour The last example will be discussed in a later section when we consider the general question
of periodic solutions. Riccati equations are notorious for having ﬁnite escape times, e. g., our old friend
at = 2:2 + 1 with solutions a:(t) = tan(t — C), and there is no general solution technique.
But there are a few rays of hope. For instance, in the simpler case r(t) E 0, the equation a: = p(t)ar:2 + q(t)a: is an example of a Bernoulli equation. The trick is to let a:(t) = 1/u(t) hence it = —1'1./u2
and we obtain the linear equation a = —q(t)u  p(t), which you may not be able to solve exactly, but at least have a formal expression for
the solution. In the general case where 7'(t) gé 0 the transformation just gives you back
another Riccati equation. Instead try the transformation 1 W)
a: t = ———,
( ) pa) ya)
and you obtain the second order linear equation
.. 15(0) .
— qt +— +7‘t t :0,
y (0 pm y 0pm which probably won’t make life much easier.
But suppose by a stroke of luck, or divine intervention, you can ﬁnd or are given one
solution a:(t) = ¢(t). Then let a:(t) = ¢(t) + 1/u(t), and substitute:  a _ 2 2¢ 1 1
a» u, —p(t)[¢ + u W] +q(t)(¢+u mt)
which simpliﬁes to the linear equation it = —(  2p(t)¢(t) + q(t))u — p(t), since qb(t) satisﬁes the original equation. Here are several examples; the second one sheds
further light on the notion of stability: a) at = x2 + 1 — t2 and a little staring reveals that ¢(t) = t a solution. Letting
a:(t) = t + % gives a 1 2 2
1——2= t+ +1—t
u u which simpliﬁes to 11 = —2tu — 1 with general solution u(t) = e_t2 [0 — ft e’zdr] . If we specify initial conditions a:(0) = 2:0, then a:(t) = ¢(t) = t is the solution if 2:0 = 0. Otherwise t2 CI}
1 t
u(t) ._0_f 81‘2d7 2 b) First Order Equations 29 and we observe distinct behavior depending on the value 2:0, since the integral is
a positive increasing function of t. If 2:0 < 0, then solutions are deﬁned for all t,
whereas if 2:0 > 0 then for some value t = q > 0 the denominator will equal zero,
and the solution becomes inﬁnite as t increases towards q. Applying L’HOpital’s rule to the quotient we see that it approaches —2t as t —> 00
so 23(t) approaches —t; all solutions with 23(0) = 2:0 < 0 become asymptotic to the
line x(t) = —t, which is not a solution: ‘. X=—t 2': = —a:2 +2xt— t2 +5 = —(a: — t)2 +5. We see that the straight lines x(t) = t +2,
and a:(t) = t — 2 are both solutions. By uniqueness, no solution can cross them, so
we can conclude, for instance, that any solution 23(t) satisfying S 2 is deﬁned
for all time. Letting x(t) = t — 2 +% gives it = —4u + 1, so which approaches t+2ast—>oo. Ifx(t)=t+2+%wegetil=2u—1 so $(t)=t+2+—1,
Cezt+§ and a:(t) approaches t + 2 as t —> 00. We can conclude that a:(t) = t + 2 is
asymptotically mm“ 30 Ordinary Diﬂerential Equations—A Brie! Eclectic Tour The above solution technique provides an amusing story. In the 1980s a ﬁrm entitled
MACSYMA was widely advertising a symbolic and numeric computation package. One
of their advertisements appeared frequently in the American Mathematical Monthly and
showed four people in deep thought, staring at a blackboard—a clock on the wall hinted
it was nearly quitting time. The title of the advertisement was “You can solve problems
. . . ” and on the blackboard was the initial value problem d
d—3:+y2+(2t+1)y+t2+t+1=0, y(l)=l. The author noticed it was a Riccati equation, and after brief contemplation, that y(t) = —t
is a solution. Using the technique above led to a short calculation and the answer
y(t) = —t + (Get — 1)‘1, C = 3/26, which blows up at t = ln(Ze/3) x 0.5945. No
reply was ever received to a letter sent to the company suggesting they should provide
more taxing problems to their prospective customers. The subject of periodic solutions will be discussed in a later section, but this seems
an excellent time to show a beautiful proof of a result for the Riccati equation, since it
involves a special property of the equation, similar to the crossratio property of linear
fractional transformations in conformal mapping theory. The proof can be found in the
book by V. Pliss. We are given solutions 93,(t), i = 1, 2, 3 of a Riccati equation, hence they satisfy ab, = p(t):c§ + q(t)x, + r(t), i: 1, 2, 3.
A simple computation gives M p(t)(x2(t) _ $1(t))> 933(t) — $29) — 5530) ‘ ac1(t) —
and now suppose that
(i) p(t),q(t),7‘(t) are periodic with minimum period T, and foTp(s)ds 75 O; for
simplicity let p(t) E 1.
(ii) xi(t),i = 1,2,3 are distinct Tperiodic solutions, and we can assume x1(t) <
.132 (t) < 332 (t) for all t. Integrate the cross ratio equation from to T and you obtain 0 on the left side and a
nonzero quantity on the left. We conclude that The Riccati equation with Tperiodic coefficients and foT p(s)ds 75 O can have
at most two Tperiodic solutions. The condition on p(t) is necessary; consider it = (sin t):ic2 which has an inﬁnite family
of solutions :c(t) = (c — cos t)‘1. Remark. We will adopt the terminology that a function :c(t) is Tperiodic, T > 0, if
:c(t + T) = :c(t) for all t, and T is the smallest such number. The beautiful result above is a speciﬁc case of a more general question—given a
polynomial differential equation of degree n with Tperiodic coefficients, :b(t) = :c" + a,,_1(t)ac"—1 + '   + a1(t)x + a0(t), 2 First Order Equations 31 what is the maximum number of T—periodic solutions it can have? For n = 2 we showed
the answer is two; it has been shown that for n = 3 the answer is three. But there exist
equations with n = 4 which have more than four Tperiodic solutions, and considerable research has been done on the general problem.
Sample control theory problem: Given the linear regulator problem
a':=a:+u, 33(0) = 1 and cost functional 1 1 1
C[u] = §k$(1)2 + 5/ [3m(t)2 + u(t)2]dt:
o
a) Find the feedback control law, the optimal output a:(t), and the value of C[u] for the case k = 3. b) Find the feedback control law for the case k = 0, and use numerical approximations
to find u(a:),a:(t), and CM. 6 Comparison Sometimes Helps This short section is merely to point out a very useful tool in the ODE mechanics
toolbox—the use of comparison results to obtain some immediate insight into the behavior
of a solution. These are often overlooked in introductory courses. The simplest and most often used comparison result is the following one: Given the functions f (t, as), g(t,a:), and h(t,a:), all satisfying the existence and
uniqueness criteria for the initial value problem in some neighborhood B of the point (t0,a:0). Suppose that
f(t,a:) < g(t,a:) < h(t,a:) for (t, as) in B. Then the solutions of the initial value problem i! = f(t, 11),!!(750) = moﬂ'ﬁ = g(ta$)v$(t0) = 950; z' = h(t, z), z(to) = $0,
satisfy the inequality
W) < 3«‘(t) < 2(t) for t > to and (t,a:) in B. A formal proof can be given, but the intuitive proof observing that all the solutions start
out at (to,a:0) and %% < ‘31—: < 53;: suffrces. One or both sides of the inequality can be used. Examples. a) For t 2 1,332 + 1 < 3:2 + t2 from which we can conclude that the solutions of
a': = 3:2 + t2 have ﬁnite escape time. A sharper estimate would be that 3:2 < 3:2 + t2 32 Ordinary Dittereniial Equations—A Briei Eclectic Tour for t > 0 so the solutions of % = m2 + t2, 20(0) = (no > 0 grow faster than the
solutions of % = 2:2, 27(0) = (to which become inﬁnite at t = l/z'o. b) Since
1+2:2 <x1/2+t2 <x+t2 for t > 0 and m 2 1 we conclude that the solution of :z': = 31/2 + t2, x(0) = 1, is
bounded for all time and satisﬁes the inequality 1+t+t2/3<m(t)<3et—t2—2t—2. c) Some students presented the author with a problem in ocean climate modeling
governed by the initial value problem i=¢>(t)—em4, m(0)=0, e>0, where ¢(t) was a periodic function too ugly to describe, but satisﬁed the inequality
0 S ¢(t) S W. This implies that —E.’II4 g ¢(t) — 611:4 _<_ k2 — 611:4, and since the solution of 2'0 2 —em4, 20(0) = 0 is m(t) E 0, and the equation
a': = k2 — as“ has two equilibrium solutions m(t) = :i: 4 k2/e, with the solution
satisfying 23(0) = 0 trapped in between, we can conclude that the solution of the IVP
is bounded for all t > 0. As the last example indicates, some comparison results are very handy, and a little analysis
can pay off before embarking on some numerical work. 7 Periodic Solutions A very deep question which occupies a sizeable part of the ODE literature is Given :17: f (t, as), where f and g; are continuous for —00 < t < 00,
a < m < b, and f is Tperiodic, does the equation have a T—periodic solution? By Tperiodic, recall that we mean f (t + T, m) = f (t, m) for all t, and T is the smallest
such number, e.g., sin 2t has period T = 27r/2 = 71' but obviously sin(2t + 271) = sin 2t.
Hence we are asking whether there exists one (or possibly many) solutions a:(t) satisfying :i:(t) = f(t,m(t)), a:(t + T) = $(t). The extension of the question to higher dimensional systems a: E R”, f = (fl, . .. , fn)
is an obvious one. For onedimensional, autonomous systems a‘: = f (as) it is obvious that they can’t have
any periodic solutions. An intuitive argument is that solutions move along a phase line
unidirectionally so they can’t return to a point they have passed. For higher dimensional
systems this is not true: it + a: = 0 is equivalent to the ﬁrst order system dc = y,
g) = —$, and every nontrivial solution is 27rperiodic. Therefore, we must consider
onedimensional, nonautonomous equations a': = f (t, or). 2 First Order Equations 33 Comment: with the current emphasis on dynamical systems, and the consequent
geometry of trajectories and ﬂows, plus the availability of excellent 2D or 3D graphics,
there occasionally seems to be an unwillingness to discuss nonautonomous systems. They
are sometimes avoided by increasing the dimension; for instance, the onedimensional
equation in = f (t, .73) becomes the twodimensional autonomous system :b = f (y, .73),
g = 1. Not a lot is gained except some nice graphs, and the author believes a study of
the onedimensional equation per sé gives a lot of insight into what happens at higher
dimensions. We start the discussion with the homogeneous linear equation :i: = a(t).7:; a(t) continuous, — 00 < t < oo; a(t + T) = a(t). (L) Since f (t, .73) = a(t):c and 35 are continuous for all t and :5, there is no loss of generality
in assuming the initial conditions .7:(0) : .730. Then the solution of (L) is T t
:c(t) = :50 exp a(3)ds] , and
0 by a change of limits of integration we get .7:(t + T) = $0 [/0t+T a(s)ds] = :50 exp /0T a(s)ds] exp /Tt+T a(s)ds]
= :50 exp l/OT a(s)ds] exp [/Ot a(s)ds] since (17¢ + T) = a(t). But the last expression will equal .7:(t) if and only if
exp [ 0 a(s)ds] = 1, and we conclude that all solutions of (L) are Tperiodic if and only if fOTa(s)ds = 0.
This is the same as saying a(t) has average value zero. Example. it = (sin t)x will have all 27rperiodic solutions whereas a‘: = (1 + cos t)a: or
:i: =  sint/2:c will have no 27rperiodic solutions. Now what about the nonhomogeneous equation?
is = a(t):c + b(t), a(t), b(t) continuous, — 00 < t < oo,
a(t + T) = a(t), b(t + T) = b(t). (L),
The following is true: (L)n will have a T—periodic solution for any b(t) if and only if (L) has no
nontrivial Tperiodic solutions, or equivalently jg a(s)ds 95 0. The key to the proof is the observation that if a solution exists satisfying the relation
.7:(0) = rc(T) then it must be periodic. For if .7:(t) is a solution, then y(t) = .7:(t + T) is
also a solution since a(t) and b(t) are Tperiodic. But 23(0) = .7:(T) implies 23(0) 2 31(0)
so by uniqueness .7:(t) = y(t) = .7:(t + T) for all t. 34 Ordinary Diﬁerential Equations—A Brief Eclectic Tour The proof of the assertion is to observe that the solution satisfying 20(0) 2 c is w) = cexp U; ands] + exp U; ands] [Qt exp [— [Os a(r)dr] b(s) ds. Then x(0) — x(T) = 0 implies that c 2 exp a(s)ds] fOT exp [ — fos a(r)dr]b(s)ds
1 7 exp [foT a(s)ds] which has a unique solution since the denominator is not equal to zero. Example. i = (1 + cos t)x + sint and since f02"(1 + cos t) dt = 271' a periodic solution
exists and 11(0) = x(27r) = c gives __ exp(27r) 02" exp(—s — sin 5) sin 5 ds 0 21088
_ 1 — exp(27r) N ' ' Numerically solving the IVP
it = (1 + cos t)x + sin t, 30(0) 2: —0.21088
gives x(27r) z —0.20947 which is good enough for a passing grade. If we change a(t) to sint in the last example, no 27rperiodic solution will exist. The
result for (L),, has generalizations to higher dimensional linear periodic systems. Turning to nonlinear equations there are no general results, but a useful one will be
given shortly to analyze a problem like the logistic equation with periodic harvesting j: = m (1 — — H(t),r > 0,K > 0 and H(t+T) = H(t).
We can assume H (t) = Ho + esin wt, representing a small periodic harvesting rate, or
H(t) = e¢(t) where ¢(t + T) : ¢(t). Recall that if H (t) E 0, there are two equilibria x(t) E 0 (unstable) and a:(t) E K
(stable), and if H (t) E H constant then H > rK / 4 implies the population expires in
ﬁnite time. So we might expect that if the periodic H (t) is small then the equilibria might
transmute into periodic solutions. Since the equation is a Riccati equation we know it
can have at most two Tperiodic solutions, which reinforces our intuition. The result we need, found in some 1966 notes by K. Friedrichs, is the following: Given f (t, 3:), satisfying the conditions for E and U and f (t+T, 3:) = f (t, 3:) for
all (75,30), suppose there exist constants a, b, with a < b such that f (t,a) > 0,
f (t,b) < 0 for all t. Then there exists a T—periodic solution m(t) satisfying
a:(0)=c,a<c<b. A picture almost tells the whole story: 2 First Order Equations 35 _\I‘I\"\Q\IT' x(T) The solution viewed as a map takes the interval {a S .7: S b,t = 0} continuously into
the interval {a S .7: S b,t = T}, so by the Brouwer ﬁxed point theorem there is a value
c satisfying x(0) = x(T) = c. The corresponding solution extended periodically is a
T—periodic solution. Now rewrite the logistic equation as ¢=m(1—%)'—H(t)=%[_<$_§)2+(§;_K1:I(t))] and assume 0 < H(t) < rK/4. Then K 1‘ K2 KH(t) rK
$‘3‘x‘E[T" r ]‘7_H(t)>0
1‘ K2 K2 KH(t)
III—K $—R::——4—+T— T :——H(t)<0 and we can conclude there is a (stable) Tperiodic solution x(t) with K /2 < x(t) < K.
Note also that 35:0: :i:=—H(t)<0 so if we change variables t to —t we can conclude there is an (unstable) Tperiodic
solution x(t) with 0 < x(t) < K/2. The reader might wish to generalize the result to the case where r(t) and K (t) are
Tperiodic and satisfy bounds like 0 < Tmin S Ta) S Tmax < 007 0 < Kmin S S Kmax < 007 36 Ordinary Diﬂerential Equations—A Brief Eclectic Tour and H (t) < 7'(t)K (t) /4, replacing the constant a with K (t) /2 and b with K (t). Or
consider the case of proportional harvesting H (t) = E(t)m, E(t + T) = E(t) and
0 < Emin S E“) S Emax < Tmin' Finally, a more general result which might be handy is the following one, proved by
AC. Lazer and the author in an article in Mathematics Magazine: Given m' = g($) +h(t), where g($) is a smooth function, and h(t) is Tperiodic,
then if g”(m) is either strictly positive or strictly negative for all m, there will
exist at most two T—periodic solutions. This veriﬁes the previous result for the Riccati equation but allows for more generality. Examples. a) The logistic equation with periodic harvesting has g(x) = 'r$(1 — m/K) and g” =
—27‘/ K < 0. b) A more exotic example is i 2 2e” — 2x + sinwt and g”($) = 2e—z > 0. But
f(t,$) satisﬁes 1 S f(t,0) S 3, and —2.264 S f(t, 1) g —O.264 so there is one
periodic solution $(t) with 0 < $(O) < 1. An examination of the direction ﬁeld
shows there can be no other. 8 DWmmmmmnmSMmhMasmmmn If we are given the NP dm
E = f(ta$) $(to) = $0
and a solution $(t), then we immediately know that
d$(t)
= t
dt tzto 071:0) and consequently a linear approximation to $(t) would be $(t) % $0 + f (to, $0)
(t — to). This is the basis of Euler’s Method of numerical approximation.
But if our tolerance for differentiation is high, we can use the chain rule to go further.
Since
d% _d _6fdm af_af 6f
F — Ef(ta$) — + a _ ﬁaaﬂﬂtﬂ) + 50,105 then all these can be formally computed and evaluated at (to,xo). This gives us an
exact value of the second derivative of .7:(t) at t = to and allows us to get a quadratic
approximation d2
w) 2 $0 + f(to,a:o)(t — to) + Eff (t — t0)2/2!
t=t0
where
(12$ _ 3f(t0,$0) 3f(to,$o)
m at. — ——am WOW + _6t ' 2:20 2 First Order Equations 37 One could proceed to get higher order approximations, but the differentiations will get
onerous in most cases. The technique is the basis for the Taylor series numerical methods; for n = 2 it might
be useful if you are stranded on a desert isle with only a legal pad, pen, and a ten dollar
KMart calculator. Example. 3—”; = 11%,.1(1) = 4, then and since
6f _ —t2 g_ 2t
az—2ﬁ(1+ﬁ2’ at 1+ﬁ’
weget
d2x _( —12 12 )+ 2(1) _ 71
dt2 :11 — 2¢Z(1+¢Z)2 1+¢Z 1+~/Z  108'
So ( )2
1 71 t—l
z(t)~4+§(t—1)+ﬁ 2! . An approximate value of $(2) is 4.66204 and a highly accurate numerical scheme gives
4.75474. Computing higher derivatives is more effective when used formally, and an example is
the logistic equation where f = 'rx(1 —— x/K Then dzx dfda: 2m: .7: 2 2.7: .7:
3—571? (“7) (“(1—3) ’" $(1’f)(1—E)
and we see that d% d22:
_ ‘f _ '
dt2>010<x<K/2, dt2<01fK/2<.z'<K
so the solutions are the familiar sigmoid shaped logistic curves with a change of concavity atrc=K/2. Furthermore, d—"‘ = rK/4 is the maximum growth rate, which we recall is also dt
$=K/2
the critical harvest rate when the population is being constantly harvested. We obtain 38 Ordinary Diﬂerential Equations—A Briel Eclectic Tour the plausible conclusion that the population will expire in ﬁnite time if the harvest rate
exceeds the maximum growth rate. The example is also important because it beautifully illustrates how just a little simple
qualitative analysis can give an excellent description of a solution, without resorting to
lengthy analytic or numerical techniques. Merely knowing the equilibrium points as = 0,
a: = K, noting that d1: / dt > 0 for 0 < an < K, and doing the simple concavity analysis
above gives us the logistic curve. Contrast this with the usual example or exercise found
in most textbooks to solve the (separable) logistic equation, which ends up ﬁnally with
an expression like K130 t = —— 0 =
1:( ) $0 + __ $0)e_1.ta 1:( ) 1:07
from which very little insight is immediately evident.
9 Dependence on Initial Conditions
Given the differential equation iii—f = f(t,a:) and two solutions 1:1(t), 1:2(t) satisfying x1(t0) = a, $2050) = b where a and b are close, what can we say about their proximity
for values of t > to? This is the problem of dependence on initial conditions, and to get
the needed estimate we will use one of the workhorse tools of nonlinear ODE’ers. Its
proof can be found in almost any advanced textbook and is not difﬁcult. Gronwall’s Lemma: If u(t) and v(t) are nonnegative continuous functions on
to S t < 00, and satisfy u(t) S a + /t v(s)u(s) ds, t 2 to, to where a is a nonnegative constant, then u(t) g aexp Mantis], t 2 to. We will assume f (t, 1:) satisﬁes the conditions for E and U, which implies that there
exists a constant K > 0 such that the Lipschitz condition, _f(try)i S Kim—yla is satisﬁed in some neighborhood of (to, 2:0). The constant K could be an upper bound Qt am , for instance. Then each solution satisﬁes on 13,(t) = 170 + /tf(s,x,(s)) ds, 2': 1,2, 0 where 270 = a for x1(t), and x0 = b for 232(t), and we are led to the estimate x1(t)— x2(t) : la— bl + / f(s,w1(s)) — f(s,w2(s)) d$ to 2 First Order Equations 39 Applying the Lipschitz condition gives
t
lx1(t) —a:2(t)l S la—bl+ le1(s)—x2(s)lds
to
and then Gronwall’s Lemma with a = la — bl, u(t) = lx1(t) — x2(t)l, and v(t) = K,
results in the needed estimate lx1(t) — x2(t)l 3 la — bleK(t_t°), t 2 to, which requires some commentary.
First of all, the estimate is a very rough one, but does lead us to the conclusion that if la — bl is small the solutions will be close initially. There is no guarantee they will be
intimately linked for t much greater than to, nor is the estimate necessarily the best one
for a particular case. Example. If la — bl = 10—2,t0 = 0, and K = 2 then
la:1(t) — x2(t)l 3 10—26% < 6
for t < 5110006); if e = § then t < 1.956. The exponential term in the estimate eventually takes its toll. The above argument and conclusion is not to be confused with a much emphasized
characteristic of chaotic systems—sensitive dependence on initial conditions. This means
that two solutions starting very close together will rapidly diverge from each other, and
later exhibit totally different behavior. The term “rapidly” in no way contradicts the
estimate given. If la — bl < 10‘6 and e = 10—4 then t < 2.3025 in the example above,
so “rapidly” requires that at least t > 2.3025. The important fact we can conclude is that solutions are continuous functions of their
initial values recognizing this is a local, not global, result. 10 Continuing Continuation The proof of the E and U Theorem depends on the method of successive approximations,
and some ﬁxed point theorem, e.g., the contraction mapping theorem. Given the IVP,
i = f (t, x), x(t0) = :00, let 130 (t) E .730, and compute (if possible) m1(t)=m0+/ f(s,.7:o(s)) ds, x2(t)=x0+/ f(s,a:1(s)) ds,..., t0 t0
t mn+1(t) = x0 +/ f(s:$n(s)) d3: to
etc. This is a theoretical construct; after a few iterations, the integrations usually are
colossal if not impossible. You want approximations? Use a numerical scheme. A general set of conditions under which the above successive approximations will
converge to a unique solution of the IVP are as follows. First, ﬁnd a big domain (open,
connected set) B in the txplane in which f (t, x) and 8 f (t, .12) / ('39: are continuous—the
proof is the same for a system where :1) E R". Next, construct a closed, bounded box 40 Ordinary Differential Equations—A Briei Eclectic Tour I‘ = {(t,1:)  t — t0 S a, 1t — ml 3 b} contained in B—you can make this as big as
you want as long as it stays in B. Since I‘ is closed and bounded, by the properties of continuity we know we can ﬁnd
positive numbers m and k such that f(t,a:) g m, 6f(t, 10/61:] S k for (t, 1:) in I‘. It
would be nice if our successive approximations converged to a solution deﬁned in all of
I‘, but except for the case where f (t, x) is linear this is usually not the case. To assure
convergence we need to find a number 1‘ > 0 satisfying the three constraints rga, er/m, and r<1/k. Then, the successive approximations 17,, (t), n = 0, 1, 2, . . . converge to a unique solution
x(t) of the integral equation $(t) = 1:0 +/t f(s,:1:(s))dx, for It — tel g r. Furthermore, a:(t) is a unique solution of the IVP.
But what could happen is that the tdimension of our rosyeyed choice of the box I‘
may have been considerably reduced to attain an interval on which a solution exists. 2b However, looking at the picture above, it appears that we can construct a new box P1 in
B with center (to + r,x(t0 + r)), and start a new IVP there. We would create a new
value 1‘1 and by glueing the two solutions together get a solution of our original IVP now
deﬁned for to — r g t S to + r + n. We could do some carpentry at the other end
(to — r,x(t0 — r)) as well. The continuation theorem says we can continue this process as long as we stay in B,
insure that f is bounded in B, and the right and lefthand limits of the solution exist and
are in B as we hit the right— and lefthand boundaries, respectively, of the box I‘. The
process continues and the solution is deﬁned for all t, or on some inﬁnite half interval,
or it becomes inﬁnite and fails to exist for some ﬁnite value of t. The discussion implies that associated with each initial value (to, 1:0), and the corre
sponding solution of the IVP, is a maximum interval of existence which is the largest
interval on which the solution is deﬁned. It can be thought of as the ultimate ﬁnale of
the continuation process described above. It is generally impossible to compute except
when the IVP can be explicitly solved. However, the successive approximation scheme applied to a linear equation has one
very nice consequence: 2 First Order Equations 41 Given $3 = a(t)z + b(t), where a(t) and b(t) are continuous for r1 < t < m,
then every solution is deﬁned for 11 < t < m. The result is true for higher dimensional linear systems as well. Example. a) % = ta: + sin t, then every solution is deﬁned for —00 < t < 00. b) 3—: = t :11, then solutions are deﬁned for —00 < t < —1, 0r —1 < t < 1, 0f 1 < t <_ 00; depending on the choice of to in the IC 1(t0) = 9:0. A class of separable equations for which solutions exist on an inﬁnite half line are {if = f(t)g(z), where f(t) and 9(1) are continuous, f(t) >0,togt<oo;g(a:)>0,0<a:<oo, and
m 1
lim —d1' = +00, any 9:0 > 0.
H00 2:, 90")
Then any solution z(t) with IC 1(a) = b, a Z to, b > 0, exists on a g t < 00.
The proof is by contradiction; assume there is a solution y(t) which cannot be continued
beyond t = T. Since f and g are both positive, y(t) must be strictly increasing, hence y(t) —> +00 as t —> T—. Separate variables to obtain the expression /y(t) 1 t
—d1' = f 5 ds,
yo to ( ) and as t —> T— the lefthand side becomes inﬁnite while the righthand side is ﬁnite. A
simple example of such an equation is iii—f = 9:“ f (t), where 0 < a S 1, and f (t) is any
positive, continuous function. To see the local nature of the E and U Theorem and the case of the “Incredible
Shrinking r” the reader may wish to follow along in this example: d9: 2
E—l—l—m, .7:(0)—0. This is the old workhorse with solution 1(t) = tant defined for —7r/ 2 < t < 7r/ 2, and it
becomes inﬁnite at the end points. But suppose we don’t know this and naively proceed
to develop a value of r, then do some continuation. Since f (t, z) = 1 + $2 is a nice function, and doesn’t depend on t, let’s choose a box
I‘ = {(t,z) I t S a, g b} with center (0,0). Then f(z)=1+z2gi+b2=m, ? =2zg2b=k
:12 and a unique solution is deﬁned for t g 1' where 1' mi (1 b 1
= n — .
’ 1 + b2’ 2b
Since max (Ebbz) = % at b = 1, hence 2%) = %, let a = %, b = 1, then r = 1/2, and we
would ﬁnd that a: = tan(1/2) ’=” 0.55. The solution is deﬁned for —1/2 3 t S 1/2. 42 Ordinary Diﬂerential Equations—'4 Brlet Eclectic Tour The important point is that no matter' how big an a > 1/2 and b > 1 we initially chose
we would get the same 0.55 and 1/2. At (1/2,tan % (1/2,0.55) we can construct a new box P1 = {(t,z) l [t— 1/2 S a,
:c — 0.55 g b} and get b 1
< . —_ __ .
r1 _ m1n(a, (b+ 055? + 1’ 2(b+ 0.55)) An analysis similar to the one above gives a z 0.30 and b m 1.14 and r m 0.30 so
our solution has been continued to the right and is guaranteed to exist for —0.5 S t S 0.5 + 0.3 = 0.8.
Suppose we staggered on and after countless steps our solution reached the point (1.4, tan 1.4) m (1.4,5.8), and is now deﬁned for —0.5 S t g 1.4. The box 1‘" would
be 1‘” = {(t,:z;)  t — 1.4 g a, z — 5.8 S b} and we get b 1
< ‘ —— — .
7' — mm (‘1’ (b+ 5.8)2 + 1’ 2(b+ 5.8))
This gives the miniscule value 7‘ 2 0.043, so we have only continued the solution to the interval —0.5 g t 3 1.443. We expect this since we are precariously close to 7r / 2 m 1.57
where a:(t) blows up, so our continuation process will inexorably come to a grinding halt. Insight not Numbers 1 Introduction The title of this very brief chapter is taken from a quote by the eminent mathematician,
Richard Hamming, who said “The purpose of computation is insight not numbers”.
In dealing with numerical analysis/methods in the introduction to ordinary differential
equations this precept should be constantly kept in mind. Too many books nowadays have
whole chapters or vast sections discussing a variety of numerical techniques and error
estimation, losing sight of the fact that the course is ordinary differential equations not
numerical analysis. Examine carefully for the insight issue any book entitled “Differential
Equations with . . .”—name your favorite mathematical software system. The emphasis on numerical methods is a little surprising since today’s computer
software packages all possess very powerful ODE initial value problem solving routines,
and even some hand calculators have the capability of approximating solutions to the IVP.
This is enhanced by accompanying graphical capabilities, able to do gorgeous direction
ﬁeld or phase plane plots and even 3D visualization. For anyone who long ago labored
on step by step Runge—Kutta calculations with pencil, paper and maybe a primitive
calculator, or spent a few hours drawing itty bitty arrows to sketch a direction ﬁeld,
today’s technology is heavensent. Consequently, the neophyte learner is best served by a brief introduction to some
elementary methods to understand how numerical schemes work, some practice using
them and the graphics, and later, frequent usage to analyze some meaningful problems or
questions. Some discussions have the student develop elaborate programs, forgetting that
the course is ordinary differential equations not computer science. 43 44 Ordinary Differential Equations—A Brief Eclectic Tour 2 Two Simple Schemes The simplest place to stan is with Euler’s Method followed by the Improved Euler’s
Method, sometimes called the Second Order Runge—Kutta or Heun’s Method. To
approximate the value w(b), b > a, of the solution a:(t) of the WP at = f(t, 3:), w(a) = me,
select a step size h = ("T", N a positive integer, then the two methods are expressed by
the simple “do loops”: Euler’s Method:
i0 = a
3:0 = $0
fornfromOtoN—l do
tn+1 = tn + h
zn+1 = w" + hf(tn,wn)
print t N (it better be b!) print an;
Improved Euler’s Method:
’ to = a
3:0 2 3:0 fornfromOtoN—ldo tn+l = tn+h “rt 2 wn + hf(tna$n) mm = as” + gm, can) + mm, uni print t N print a: N
They are easily understood routines, but it is surprising how some expositors can clutter
them up. Modiﬁcations can be added such as making a table of all the values gun z w(tn), n = 0, . . . ,N, or plotting them, or comparing the values or plots for different values of
the step size h. 3 Key Stuff
What are the important points or ideas to be presented at this time? Here are some: The Geometry: All numerical schemes essentially are derivative chasing or derivative
chasing/correcting/chasing schemes, remembering that we only know one true value of
d:(t), that for to = a where i‘(t0) = f(t0,a:0). For Euler’s Method we use that true 3 Insight not Numbers 45 value to compute an approximate solution value 9:1 % x(tl), then an approximate value
of :i:(t1) % f(t1,:L'1), then the next value 9:2 % x(tg), etc. slope f(t 1 , x1 ) slope f(t O , x0 ) For the Improved Euler Method we compute the approximate value of :i:(t1) 2 f (t1, 9:1)
using Euler’s Method, then use the average of f (to, 9:0) and f (t1, 0:1) to get a corrected
approximation of 9:1 % x(tl), then compute the new approximate value of :i:(t1) z
f(t1,:L'1), use that to get an approximate value of :i:(t2) z f(t2,a:2), average again and
correct, etc. The picture is: x(t l .—
sope2 (m0+ m1)   1 ft, =
newx1 sope(1xl) m1 Islope f(tO, x0 ) = [no A further nice insight is to note that for the case where the ODE is simply :i: = f (t),
x(a) = 9:0 then x(tl) = 9:0 + A1 where A1 is the area under the curve f(t), to g t 3
t1. Euler’s Method uses the lefthand endpoint rectangular approximation to that area,
whereas the Improved Euler Method uses the trapezoidal approximation, which is more
accurate. Cost: The Euler Method is a one step method requiring only one function evaluation for
each step, whereas the Improved Euler Method requires two. The classical Runge—Kutta
Method requires four. For problems where the solution is very wigeg or gets large, and
consequently a very small step size h and lots of function evaluations are needed to try
to obtain accurate approximationswthis comes with a cost. If that cost, which is really
a measure of how much number crunching must be done, is too great the little gnome
inside the computer might just trash the computation. 46 Ordinary Diﬂerential Equations—A Briel Eclectic Tour Error: The important error of a numerical method is the global error which determines
the order of the method, but almost equally as important is the local error. Suppose
we are given the IVP i = f (t, x), m(t0) = $0, have selected a step size h, and have
generated a sequence of points with the method: zo=$(to), $1%m(t1),...,$nmz(tn), tk+1=tk+h, k=0,1,...,n—1. The global error is the value [zn — z(tn), and the method is said to be of order r
if lxn — z(tn)] = 002’), which means there is a constant, dependent on things like
the initial value and the function f, such that Izn — z(tn) g M h’. For instance the
Euler Method is of order 1, the Improved Euler Method is of order 2, and the classic
Runge—Kutta is of order 4. The local error is defined in terms of one step of the iteration as follows: suppose the
numerical scheme has landed you at point (tj, :31.) where m, R: m(t,~). For the new initial
value problem i = f (t, i), ﬁtj) = zj compute one step of the method to get the point
(tj+1,ij+1). The local error is the value i(t,+1) — ij+1. A picture is helpful where
6g is the global error and 6g the local one: In general, if the global error is of order r the local error is of order r + 1.
The important points to be emphasized are i) If the numerical method is of order r, and we compute with step size h, then again with new step size 72 = h/m, where say m = 2,4,10,102,... ,1023 (your choice),
we expect the global error to be reduced by 1/m’. If mN m $(tn) for the new step
size then . . h r M T
on — men)! 3 MM lmphes ImN — Wu)! S M (a) = W”) ‘ The last remark has several implications. It is the basis of a reasonable test of the accuracy
of the calculations. Compute with step size h, then again with step size h/m where m
could equal 2, or 4, or 10, for instance. Decide beforehand the accuracy you need, in
terms of signiﬁcant ﬁgures, and compare the two answers. Use their difference to decide 3 Insight not Numbers 47 whether you need to recompute with a smaller step size or can stop and go have a latte.
The second implication is ii) Do not expect that if you keep reducing the step size you will get closer and closer to
the solution. The fiendish roundoff error may contaminate your results; that is usually
spotted by heartwarming convergence as the step size is successively reduced, then at
the next step the answers go wacky. Also do not expect that even with a very small
step size and a reasonable order method you will always get a close approximation.
The constant M in the error estimate could be huge, or the method, which computes
its own directions, may have gotten off the mark and is menin computing a solution
far removed from the actual one. 4 RKF Methods This section is not intended as a tutorial on RKF methods (the RK stand for Runge—Kutta,
the F for H. Fehlberg who ﬁrst developed the theory for the method), but to familiarize
the reader with the underpinnings of the method. It’s not a bad idea to know what the
little gnome inside the computer is up to. The foundation of the method is based on the strategy that by carefully controlling the
size of the local error, the global error will also be controlled. Furthermore, a general
procedure for estimating the size of the local error is to use two different numerical
procedures, with different orders; and compare their difference. Fehlberg was able to do
this using Runge—Kutta methods of order 4 and 5, so the local errors are respectively
0(h5) and 0(h6); his method involved six function evaluations at each step. This was incorporated into a computing scheme which allows for changing the step
size h by following this strategy: Selecting a step size h, estimate the relative (to h)
size of the local error by using the two method procedure described above; this gives a
number est.. Next, decide what is an allowable relative local error e, and now use the following criteria in computing (tk+1, n+1) given (tk, wk): i) If est. > e reject the mk+1 obtained, and compute a new n+1 using a smaller step
size. ii) If est. < 6, accept mk+1 and compute n+2 using step size h or a larger one. Of course, in classroom practice the parameters est., e, h and others are already built
into the RKF45 program so you will see only the computations. Some calculators use an
RKF23 program. The advantages of the RKF45 scheme besides its inherent accuracy are that when the
solution is smooth, and the approximations are closely tracking it, the step size selected
can be relatively large, which reduces the number of function evaluations (costl). But if
the solution is wiggly the step size can be reduced to better follow it—a ﬁxed step size
method could completely overshoot it. Furthermore, if the solution is too wild or ﬂies off in space the RKF methods have
a red ﬂag which goes up if est. < 6 cannot be attained. For schemes which allow
for adjustments, this requires a bigger value of e be given, otherwise for schemes with
preassigned parameters the little gnome will fold its tent and silently steal into the night. 48 Ordinary Diﬂerential Equations—A Briet Eclectic Tour Given the discussion above does the author believe that students should become facile
or even moderately familiar with Runge—Kutta methods of order 3 or 4?
Absolutely Not! They require hellishly long tedious calculations or programming because
of the number of function evaluations needed to go one step, and why bother when they
are available in a computer or calculator package. Use the Euler and Improved Euler
Methods to introduce the theory of numerical approximation of solutions of the IV P, and
the notion of error. Then give a brief explanation of RKF45, if that is what is available,
so they will have an idea of what is going on behind the computer screen. Then develop insight. 5 A Few Examples The ideal conﬁguration is to have a calculator or computer available to be able to do the
Euler and Improved Euler Methods, then have a numerical package like RKF45 to check
for accuracy and compute approximations far beyond the capacity of the two methods.
Start off with a few simple IVPs to gain familiarity with the methods. Comment: The author ﬁnds it hard to support treatments which present a large
number of problems like:
For the initial value problem i‘ = 21', 1(0) = 1 a) Use the Euler Method with h = ﬁg and 11—6 to approximate the value of b) Use the Improved Euler Method with h : 31, g, and % to approximate the
value of c) Find the solution and compute w(1),' then construct a table comparing the
answers in a) and b). This is mathematical scut work and will raise the obvious question “If we can
ﬁnd the exact solution why in blazes are we doing this?” Here are six sample problems—given these as a springboard and the admonitory tone of
this chapter the reader should be able to develop many more (insightful) ones. We have
used RKF45. Problem 1 (straightforward). Given the IV P a}: z+t, 113(0) =3.
Use the Euler Method with h = 0.05 and the Improved Euler Method with h = 0.1 to approximate then compare the differences with the answer obtained using RKF45.
Answers: Euler, h = 0.05 : w(1) 2 5.0882078
Improved Euler, h = 0.1: 1(1) 3 5.1083323
RKF45: w(1) w 5.1089027 As expected, Improved Euler gave a much better approximation with twice the step size.
The equation can be solved by letting :L' = u2 — t, but one obtains an implicit solution. 3 Insight not Numbers 49 The next three problems use the old veteran d: = 3:2 + t2, whose solutions grow much
faster than tan t. Problem 2. A strategy to estimate accuracy, when only a low order numerical method is
in your tool kit, is to compute the solution of an IVP using a step size h, then compute
again with step size h/ 2, then compare results for signiﬁcant ﬁgures. Do this for ¢=x2+t2, a:(0)=0 and approximate 33(1) using a step size h = 0.2 and h = 0.1 with the Improved Euler
Method.
Answer: h = 0.2: 33(1) % 0.356257 h = 01; 95(1) z 0.351830 At best, we can say a:(1) z 0.35. Problem 3. Given the threadbare computing capacity suggested in Problem 2 you can
improve your answer using an interpolation scheme. Given a second order scheme we
know that if :I:(T) is the exact answer and ash(T) is the approximate answer using step
size h, then a:(T) — ash(T) z Mh2.
a) Show this implies that
4 1 N §$h/2(T) — b) Use the result above and the approximations you obtained in Problem 2 to obtain an
improved estimate of 513(1), then compare it with that obtained using RKF45. a:(T) — :ch(T) % Mh2 Answer: a) 2
a:(T) — ash/2(T) w M 2% Mh2 } solve for a:(T) ‘ 4 1
b) 95(1) % 5(0351830) — 5(0.356257) = 0.350354
RKF45 gives 95(1) z 0.350232 Problem 4. For the [VP at = x2 + t2, 95(0) 2 0. Approximate the value of 95(2) with
the Euler and Improved Euler Methods and step sizes h = 0.1 and h = 0.01. Compare
(graphically if you wish) with the answer obtained with RKF45. Answer: Euler, h = 0.1: 33(2) z 5.8520996
Euler, h = 0.01: 33(2) z 23.392534
(Now you suspect something screwy is going on)
Improved Euler, h = 0.1 = $(2) e 23.420486
(Maybe you start feeling conﬁdent)
Improved Euler, h = 0.01: (13(2) z 143.913417 (Whoops!)
RKF45: 33(2) N 317.724004 50 Ordinary Ditterential Equations—A Briel Eclectic Tour The awesome power of RKF45 is manifest! The next problem uses the direction ﬁeld package of MAPLE, possessed of the ability
to weave solutions of initial value problems through the ﬁeld, to study the existence of
periodic solutions of a logistic equation with periodic harvesting. Problem 5. Given the logistic equation with periodic harvesting 1 a: 1 1
Problem 5. Given the logistic equation with periodic harvesting
$$\dot{x} = \frac{1}{2}x\left(1 - \frac{x}{4}\right) - \left(\frac{1}{4} + \frac{1}{8}\sin \pi t\right)$$

a) Explain why it has a (stable) 2-periodic ($T = 2$) solution.

b) Such a solution must satisfy $x(0) = x(2)$. Use MAPLE direction field plots and RKF45 to find an approximate value of $x(0)$, then graph the solution. Compute $x(4)$ as a check.

Discussion: a) Since $\frac{1}{8} \le \frac{1}{4} + \frac{1}{8}\sin\pi t \le \frac{3}{8}$, we see that for $x = 2$ and all $t$, $\dot{x} = \frac{1}{2} - \left(\frac{1}{4} + \frac{1}{8}\sin\pi t\right) > 0$. For $x = 4$ and all $t$, $\dot{x} = 0 - \left(\frac{1}{4} + \frac{1}{8}\sin\pi t\right) < 0$, so by an argument given in Ch. 2 there exists a 2-periodic solution $x(t)$ with $2 < x(0) < 4$.

b) Construct a direction field plot for $0 \le t \le 2$, $2 \le x \le 4$, select a few initial conditions $x(0) = 2.4, 2.8, 3.2, 3.6$, and plot those solutions. [Direction field plot with the solution curves omitted.] The solution with initial value $x(0) = 3.2$ has terminal value $x(2) > 3.2$, whereas the one with $x(0) = 3.6$ has $x(2) < 3.6$, and it appears that $x(0)$ for the periodic solution is near 3.4. Crank up RKF45: $x(0) = 3.4$, $x(2) = 3.4257$, and now fiddle around or use bisection to get $x(0) = 3.454$, $x(2) = 3.4536$, $x(4) = 3.4534$.

Since for $x = 0$ and all $t$, $\dot{x} = 0 - \left(\frac{1}{4} + \frac{1}{8}\sin\pi t\right) < 0$, there is also an (unstable) periodic solution $x(t)$ with $0 < x(0) < 2$. It will be difficult to approximate because the direction field will veer away from it, but by letting $t \to -t$ and considering the equation
$$\dot{x} = \frac{1}{2}x\left(\frac{x}{4} - 1\right) + \left(\frac{1}{4} + \frac{1}{8}\sin\pi t\right)$$
one can use the previous strategy, since now the periodic solution is stable.

Problem 6. Using numerical methods examine the thickness of the boundary layer for the singularly perturbed differential equation
$$\epsilon\dot{x} = -x + (1 + t), \quad x(0) = 0, \quad \epsilon = 10^{-1}, 10^{-2}, 10^{-3}.$$

Discussion: From the previous chapter we see that the solution $x(t) \approx 1 + t - \epsilon$ for $t > O(\epsilon)$. The author used a calculator with an RK23 package and produced the following numbers for the IVP
$$\dot{x} = -\frac{1}{\epsilon}x + \frac{1}{\epsilon}(1 + t), \quad x(0) = 0.$$

$\epsilon = 10^{-1}$:

    t       x
    0       0
    0.02    0.1837
    0.04    0.3376
    0.06    0.4671
    0.08    0.5767
    0.1     0.6695
    0.3     1.1568
    0.5     1.3997
    0.7     1.6000

$\epsilon = 10^{-2}$:

    t       x
    0       0
    0.002   0.1815
    0.004   0.3306
    0.008   0.5537
    0.01    0.6364
    0.02    0.8770
    0.04    1.0136
    0.06    1.0495
    0.08    1.0701
    0.10    1.0901

$\epsilon = 10^{-3}$:

    t       x
    0       0
    0.001   0.6339
    0.002   0.8673
    0.004   0.9864
    0.006   1.0048
    0.008   1.0071
    0.01    1.0093

If one takes the thickness of the boundary layer as the first value $t_\epsilon$ when $x(t_\epsilon) \approx 1 + t - \epsilon$, one sees that they are approximately 0.5, 0.06, 0.006 respectively, which supports the $O(\epsilon)$ assertion.
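If you would rather not punch calculator buttons, tables like those above can be produced by any small-step integrator. A minimal sketch (Python, classical RK4 instead of the author's RK23, and a layer-exit test that is my own choice) is:

```python
def rk4_step(f, t, x, h):
    # one classical Runge-Kutta step for x' = f(t, x)
    k1 = f(t, x); k2 = f(t + h/2, x + h*k1/2)
    k3 = f(t + h/2, x + h*k2/2); k4 = f(t + h, x + h*k3)
    return x + h * (k1 + 2*k2 + 2*k3 + k4) / 6

for eps in (1e-1, 1e-2, 1e-3):
    f = lambda t, x, e=eps: (-x + 1 + t) / e      # the singularly perturbed IVP
    t, x, h = 0.0, 0.0, eps / 100                 # step size well inside the layer
    while abs(x - (1 + t - eps)) > 1e-3:          # still inside the boundary layer?
        x = rk4_step(f, t, x, h); t += h
    print(f"eps = {eps:g}: layer thickness roughly t = {t:.4g}")
```

The printed thicknesses scale like $\epsilon$, which is the point of the problem.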
4 Second Order Equations

This chapter could be titled “Second Order Equations—A Second Look,” because its
intent is not to dwell on the numerous recipes for solving constant coefficient linear
equations which stuff the chapters of most introductory texts. Rather, it is to give the
conceptual underpinnings for the general second order linear equation, offering some
alternative ways of presenting them, recognizing that the discussion is easily generalized
to higher order equations. Then a few little morsels are added which enhance some of the standard topics.

1 What Is The Initial Value Problem?

We will write the general second order equation as $\ddot{x} = f(t, x, \dot{x})$, so a solution will be a twice differentiable function $x(t)$ satisfying $\ddot{x}(t) = f(t, x(t), \dot{x}(t))$. But we have only the existence and uniqueness (E and U) theorem for the initial value problem (IVP) at our disposal, so what is the IVP?

Introduce a new dependent variable $y = \dot{x}$ and the equation becomes a two-dimensional first order system
$$\dot{x} = y, \qquad \dot{y} = \ddot{x} = f(t, x, \dot{x}) = f(t, x, y).$$
The E and U Theorem in vector form applies: let $\underline{x} = \mathrm{col}(x, y)$; then
$$\dot{\underline{x}} = \begin{pmatrix}\dot{x}\\ \dot{y}\end{pmatrix} = \begin{pmatrix} y\\ f(t, x, y)\end{pmatrix} = \underline{f}(t, \underline{x}),$$
for which the initial conditions are
$$\underline{x}(t_0) = \begin{pmatrix}x(t_0)\\ y(t_0)\end{pmatrix} = \begin{pmatrix}x_0\\ y_0\end{pmatrix}.$$
We can conclude that the IVP for the second order equation is
$$\ddot{x} = f(t, x, \dot{x}), \quad x(t_0) = x_0, \quad \dot{x}(t_0) = y_0.$$
The required continuity assumptions for E and U translate to the statement that the IVP will have a unique solution if $f$, $\partial f/\partial x$, and $\partial f/\partial\dot{x}$ are continuous in some region containing the point $(t_0, x_0, y_0)$.

An easy way to think of this is that we needed one “integration” to solve $\dot{x} = f(t, x)$, so we get one constant of integration to be determined by the initial condition. For the second order equation we need two “integrations” and get two constants. This is easily seen with the simple example
$$\ddot{x} = t^2 \;\Rightarrow\; \dot{x}(t) = \frac{t^3}{3} + A \;\Rightarrow\; x(t) = \frac{t^4}{12} + At + B,$$
and more abstractly, using the integral equation representation,
$$\dot{x} = y \;\Rightarrow\; x(t) = A + \int^t y(s)\,ds$$
and
$$\dot{y} = f(t, x, y) \;\Rightarrow\; y(t) = B + \int^t f(s, x(s), y(s))\,ds = B + \int^t f\!\left(s,\, A + \int^s y(r)\,dr,\; y(s)\right)ds.$$
The above discussion readily generalizes to higher order equations, e.g., for $n = 3$
$$x^{(3)} = f(t, x, \dot{x}, \ddot{x})$$
becomes
$$\dot{x} = y, \quad \dot{y} = z, \quad \dot{z} = f(t, x, y, z).$$
The IVP is
$$x^{(3)} = f(t, x, \dot{x}, \ddot{x}), \quad x(t_0) = x_0, \quad \dot{x}(t_0) = y_0, \quad \ddot{x}(t_0) = z_0,$$
whose solution is a thrice differentiable function $x(t)$ satisfying the initial conditions and the relation $x^{(3)}(t) = f(t, x(t), \dot{x}(t), \ddot{x}(t))$ in a neighborhood of $(t_0, x_0, y_0, z_0)$. But we are straying from the hallowed ground of
n = 2. IMPORTANT NOTICE If the following material is to be presented it is incumbent on the presenter to give
a few illuminating equations and their solutions, even if the audience isn’t quite
sure where they came from. Illumination is the stalwart guide to understanding.

2 The Linear Equation

We could first consider the general, nth order, linear, homogeneous equation
$$x^{(n)} + a_1(t)x^{(n-1)} + \cdots + a_{n-1}(t)\dot{x} + a_n(t)x = 0,$$
then later replace the 0 with $q(t)$, a forcing term, to get the nonhomogeneous equation.
But we will tenaciously stick with the second order equation, n = 2, for several reasons: a. The theory for the case n = 2 carries over easily to higher orders. b. Higher order linear equations, except possibly for the case n = 4 (vibrating beam
problems), rarely occur in applications. One reason for this is simply Newton’s Law,
F = ma, in which the acceleration a is a second derivative. c. Higher order constant coefﬁcient equations require ﬁnding the roots of cubic or higher
order polynomials. Consequently, unless one wants numerical approximations, most
problems and examples are rigged. But factoring polynomials is nearly a lost art, and
even the quadratic formula is struggling a little to avoid becoming an endangered species. Hence sticking with n = 2 is a wiser course.
Therefore, we will discuss the IVP
$$(*)\qquad \ddot{x} + a(t)\dot{x} + b(t)x = 0, \quad x(t_0) = x_0, \quad \dot{x}(t_0) = y_0,$$
where $a(t), b(t)$ are continuous for $r < t < s$, and $r < t_0 < s$, and $-\infty < x_0, y_0 < \infty$. Since $f(t, x, \dot{x}) = -b(t)x - a(t)\dot{x}$, then $\partial f/\partial x = -b(t)$, $\partial f/\partial\dot{x} = -a(t)$ are continuous, so the E and U theorem tells us the solution exists and is unique. A deeper and happy fact is that it will be defined for all of $r < t < s$.

Now comes a major fork in the road. One must admit that exact solutions of (*) can only be found when

i) $a(t)$ and $b(t)$ are constants,

ii) (*) is an Euler Equation,
$$\ddot{x} + \frac{a}{t}\dot{x} + \frac{b}{t^2}x = 0, \quad t \neq 0, \quad a, b \text{ constant},$$
which is a constant coefficient equation in disguise, or

iii) the good fairy has provided us with a solution $x_1(t)$ and we can use the method of reduction of order: assume a second solution is of the form $x_2(t) = x_1(t)\int^t u(s)\,ds$ and substitute in (*):
$$\ddot{x}_2 + a(t)\dot{x}_2 + b(t)x_2 = \left(\ddot{x}_1\!\int^t\! u\,ds + 2\dot{x}_1 u + x_1\dot{u}\right) + a(t)\left(\dot{x}_1\!\int^t\! u\,ds + x_1 u\right) + b(t)\left(x_1\!\int^t\! u\,ds\right) = 0,$$
since we assumed $x_2(t)$ is a solution. Now combine terms:
$$\left(\ddot{x}_1 + a(t)\dot{x}_1 + b(t)x_1\right)\int^t u\,ds + \left(2\dot{x}_1 u + x_1\dot{u} + a(t)x_1 u\right) = 0,$$
and since $x_1(t)$ is a solution the first expression is zero and we obtain the first order linear equation for $u$
$$x_1(t)\dot{u} + \left(2\dot{x}_1(t) + a(t)x_1(t)\right)u = 0.$$
If we can find the solution $u(t)$, and then evaluate its integral, we can find $x_2(t)$, but otherwise the method is pretty much of theoretical value only.

Example. $\ddot{x} + 4t\dot{x} + (4t^2 + 2)x = 0$, $x_1(t) = e^{-t^2}$. Then $u(t)$ satisfies
$$e^{-t^2}\dot{u} + \left(-4te^{-t^2} + 4te^{-t^2}\right)u = e^{-t^2}\dot{u} = 0,$$
hence $u(t) = $ constant, say $u(t) = 1$, and $x_2(t) = e^{-t^2}\int^t ds = te^{-t^2}$.

Remark. In many texts the method of reduction of order assumes $x_2(t) = x_1(t)u(t)$, which results instead in the same first order linear equation, but for $\dot{u}$:
$$x_1(t)\ddot{u} + \left(2\dot{x}_1(t) + a(t)x_1(t)\right)\dot{u} = 0.$$

For all but a few other special equations, one must resort to constructing infinite series solutions, with all the agony of regular points, recurrence relations, regular singular points, indicial equations, logarithmic cases, etc. This is a subject dear to the hearts of special function aficionados, but the neophyte can lose sight of the forest for the indices. More on series, later.
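As a sanity check on the reduction of order example above (assuming SymPy is available; this is a verification sketch, not part of the text):

```python
import sympy as sp

t = sp.symbols('t')
x2 = t * sp.exp(-t**2)                           # the second solution produced above
residual = sp.diff(x2, t, 2) + 4*t*sp.diff(x2, t) + (4*t**2 + 2)*x2
print(sp.simplify(residual))                     # prints 0
```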
Does this mean one should pay a little obeisance to the notion of linear independence of functions, then go directly to the constant coefﬁcient case where all can be resolved
via the quadratic formula and factoring? The author believes not, and suggests that a
brief sojourn into the general theory of (*) will provide a much stronger foundation and '
greater understanding. Two approaches suggest themselves. The ﬁrst is to deal directly with the structure of
the solutions of The second is to discuss the general twodimensional linear system ri‘ = a(t)w + b(t)y, g] = C(t):l: + d(t)y and develop brieﬂy the underlying theory, then note that by letting a(t) = 0, b(t) = 1 we
obtain (*) in the form r = y, y = c(t)x + d(t)y :> i — d(t)$ — c(t)a: = 0. This approach is a higher level of exposition, but has the advantage of simultaneously
taking care of the two—dimensional system and the important case where a(t), b(t), c(t),
and d(t) are all constants. This case is essential to the study of stability of equilibria of
almost linear systems. For both approaches the analysis of the nonhomogeneous system,
with 0 replaced by q(t) in (*), will be deferred until later in the chapter. 4 Second Order Equations 57 3 Approach 1—Dealing Directly Linearity is what makes everything tick, and draws on the analogous analysis of algebraic
linear systems. Given any solutions :51 (t), $2(t), . . . ,xk(t) of the second order equation
then any linear combination I:
$(t) = Z OWN)
1 is a solution by rearrangement and linearity of differentiation 1:
at) + awe) + bane) = Z cj (iv(t) + a(t)é:j(t) + bum(n) and the last expression is zero since each raj (t) is a solution.
Linearity will help us answer the question—how many solutions do we need to solve the IVP?
i) If x0 2 yo = 0 then a:(t) E 0 is the only solution—this is an important fact. ii) Given one solution $(t) with $(t0) = a aé 0, then if $0 aé 0 the solution x1(t) =
Eduard) will satisfy x1(t0) = $0. But ai:1(t0) = Efﬂto) will not equal yo unless
d:(to) = £3110, which would be fortuitous. If yo = 0 then i:(t0) must also. A similar
analysis can be given for the case yo = b aé 0. Example. d3 — 6i: + 5:1: = 0.
A solution is $1(t) = 6’ since 6’ — Get + Set = 0,
and) = 65t since 256’ — 30é5‘ + 565’ = 0.
If to = 0
x1(t) satisﬁes a;(0) = 1, = 1
1132(t) satisﬁes 113(0) = 1, = 5.
Neither would solve the IVP with 33(0) = 2, = —2, for instance. The intuitive analysis given above suggests we will need more than one solution.
We can therefore appeal to linearity and ask if given two solutions $1 (t) and $2(t) can
we ﬁnd constants c1 and cz so that $(t) = c1231 (t) + 022:2 (t) satisﬁes (13050) = 011131050) + 021132050) = $0,
d:(to) = 013751050) + 0237:2050) = 1/0 The constants c1, 02 must be unique by uniqueness of the solution of the IVP, and from
linear algebra we conclude that $1 (t), $2 (t) must satisfy det ($100) MW) at 0. $1050.) 352030) 58 Ordinary Diﬁerential Equations—A Briel Eclectic Tour But we have set our sights too low—we want to be able to solve' any initial value
problem for any $t_0$, so the goal is to find two solutions satisfying
$$\det\begin{pmatrix} x_1(t) & x_2(t)\\ \dot{x}_1(t) & \dot{x}_2(t)\end{pmatrix} \neq 0, \quad r < t < s.$$
The determinant is called the Wronskian, and we will just denote it by $W(t)$. Observe that
$$\frac{d}{dt}W(x_1, x_2)(t) = \frac{d}{dt}(x_1\dot{x}_2 - x_2\dot{x}_1) = \dot{x}_1\dot{x}_2 + x_1\ddot{x}_2 - \dot{x}_2\dot{x}_1 - x_2\ddot{x}_1$$
$$= x_1\left(-a(t)\dot{x}_2 - b(t)x_2\right) - x_2\left(-a(t)\dot{x}_1 - b(t)x_1\right) = -a(t)(x_1\dot{x}_2 - x_2\dot{x}_1) = -a(t)W(t);$$
this is a simple version of Abel's Formula. Its importance is that it implies
$$W(t) = W(t_0)\exp\left[-\int_{t_0}^t a(s)\,ds\right],$$
from which we conclude

The Wronskian of two solutions is identically zero ($W(t_0) = 0$ for some $t_0$) or
it is never zero. But if W(t) is never zero that means $1(t) and $2(t) can be conjoined to solve any
IVP, and we can say 9:1(t) and 9:2(t) are a fundamental pair of solutions.
Before adding more facts, we must ask the fundamental question, begging for an answer.
Does a fundamental pair of solutions exist? Yes, because the solution of the IVP is unique so we can choose
$1(t) satisfying $1050) = 1: $1030) = 0a
9:2(t) satisfying 932050) 0a 9.32030) = 1 Then W(t0) = det ( (1) ‘1’) = 1 51$ 0 so W(t) is never zero and the solution of the IVP is uo=mmm+mmm which can be easily veriﬁed. Slick! But are we stuck with the one fundamental pair of solutions we just found? No, we
can find countless others by appealing to a little linear algebra. Let A '= (‘cl 3) be any
nonsingular matrix, so det A 51$ 0, and given a fundamental pair of solutions m1(t), 2520?),
then 6 $1“) $2(t) _ e ax1(t)+c:c2(t) b$1(t)+d.’l,‘2(t)
d t KM) mm) Al ‘ d t(a¢1<t>+c¢2<t> b¢1<t>+d¢2<t>) : det(W(t)A) = det W(t) det A 51$ 0. By linearity the linear combinations $3(t) = am1(t)+cm2(t), 9:4(t) = b93101) +dm2(t) are
solutions, and since their Wronskian is not equal to zero, they are another fundamental pair 4 Second Order Equations 59 of solutions. Therefore, the initial value problem can be solved with a linear combination
of $x_3(t)$ and $x_4(t)$.

Example. In the previous example $x_1(t) = e^t$, $x_2(t) = e^{5t}$ and
$$W(t) = \det\begin{pmatrix} e^t & e^{5t}\\ e^t & 5e^{5t}\end{pmatrix} = 4e^{6t} \neq 0 \ \text{for all } t,$$
so $x_1(t)$ and $x_2(t)$ are a fundamental pair. Therefore every solution can be written as $x(t) = c_1e^{t} + c_2e^{5t}$. If $t_0 = 0$ and the IC are $x(0) = 4$, $\dot{x}(0) = 1$, we would solve $c_1 + c_2 = 4$, $c_1 + 5c_2 = 1$ to obtain $c_1 = 19/4$, $c_2 = -3/4$.
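A two-line numerical confirmation of that arithmetic (assuming NumPy; purely a check):

```python
import numpy as np

c1, c2 = np.linalg.solve([[1.0, 1.0], [1.0, 5.0]], [4.0, 1.0])
print(c1, c2, c1 + 5*c2)      # 4.75, -0.75, 1.0 -- i.e. 19/4, -3/4, and x'(0) = 1
```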
We can find many other fundamental pairs, e.g. (1:3(t) = 36’ — 2e“, (1:4(t) = 7et + 36“
but one simple pair is enough. The above analysis can be easily carried over to the nth order linear equation, and we
can conclude that the space of solutions is an ndimensional linear space spanned by any
set of n fundamental solutions—we have shown this directly for n = 2. Any fundamental set of n solutions 9:1(t), 932 (t), . . . 7:1:,,(t) will satisfy
(1:1(t) . . . 93,,(t)
931(t) . . . :isn(t)
W(t) = det : . aé 0
x(”_1)(t) ... mgr—DU) and if the differential equation is
as“) + a1(t)m(”‘1)(t) +    + an_1(t)a‘c + an(t)m = 0 it can even be shown, using the theory of linear systems, that W(t) = W(to)exp [— a1(s)ds] , hence W(t) is identically zero or never zero. But all the salient facts are covered by
limiting the argument to n = 2, and a lot of blackboard space is saved as well. The sharpeyed knowledgeable reader will surely have noted that the author has avoided
the notion of linear independence of functions and solutions. It is there if one appeals to
the fact that the column vectors of any nonsingular matrix are linearly independent. For a fundamental pair this means
9010‘) 902(t) 2 0 _ _
61(x1(t))+02(¢2(t> — o “1‘02” 019:1(t) + 021:2(t); 0 <=> 01 = 02 = 0, hence and this is true for r < t < s, which means 9:1(t) and 932 (t) are linearly independent. 60 Ordinary Diﬂerential Equations—A Brief Eclectic Tour However, the important distinction is that for a fundamental set of solutions, the
Wronskian necessary and sufﬁcient condition applies: W(t) 7E 0 <=> a fundamental set <=> linear independence. This is not necessarily the case for functions which are not solutions of a linear ODE.
The oft quoted example is $x_1(t) = t^3$, $x_2(t) = |t^3|$; they are linearly independent on any interval $-r < t < r$, but
$$W(t) = \det\begin{pmatrix} t^3 & |t^3|\\ 3t^2 & 3t|t|\end{pmatrix} \equiv 0.$$
Both satisfy $x(0) = \dot{x}(0) = 0$ so they could not be solutions of any second order
solutions, which will automatically be linearly independent in view of the Wronskian
test, and not bother too much with linear independence, per sé. Too many introductory
texts spend excessive time on linear independence, and remember, the subject is ordinary differential equations not algebra. 4 Approach 2—The Linear System For this treatment all that is needed is a little knowledge of matrixvector multiplication,
which in the 2dimensional case can be easily learned if needed. We consider the
twodimensional, linear, homogeneous system of differential equations 3 3531:3833 °r (3) = (283 223) (Z) If we let a; be the vector col(x, y) and A(t) be the matrix and the IV P becomes at = A(t)x, $(t0) = x0 = ($0) ,
~ ~ ~ ~ yo (IVP) or
d; = a(t)a; + b(t)y, x(t0) : x0
1) = C(t)w + 61(15):}, We) = yo.
If a(t), b(t), C(t), d(t) are continuous on r < t < s, and r < to < s, then the solution of the IVP will exist and be unique for any —00 < we, go < 00. Since the system is linear
we get the added beneﬁt that the solution is deﬁned for all of r < t < s. 4 Second Order Equations 61 We will use the matrix A(t) some of the time to avoid writing down the full system,
so a solution of the IVP is a vector function $\mathrm{col}(x(t), y(t))$ satisfying
$$\begin{pmatrix}\dot{x}(t)\\ \dot{y}(t)\end{pmatrix} = A(t)\begin{pmatrix}x(t)\\ y(t)\end{pmatrix}, \qquad \begin{pmatrix}x(t_0)\\ y(t_0)\end{pmatrix} = \begin{pmatrix}x_0\\ y_0\end{pmatrix}.$$

Example. $A(t) = \begin{pmatrix} 2 & 1\\ 3 & 4\end{pmatrix}$. One solution is $\varphi_1(t) = \begin{pmatrix} e^t\\ -e^t\end{pmatrix}$, since
$$\frac{d}{dt}(e^t) = 2(e^t) + (-e^t), \qquad \frac{d}{dt}(-e^t) = 3(e^t) + 4(-e^t),$$
and if $t_0 = 0$ it satisfies the initial conditions $x(0) = 1$, $y(0) = -1$. Another solution is
$$\varphi_2(t) = \begin{pmatrix} e^{5t}\\ 3e^{5t}\end{pmatrix}, \quad\text{since}\quad \frac{d}{dt}(e^{5t}) = 2(e^{5t}) + (3e^{5t}), \qquad \frac{d}{dt}(3e^{5t}) = 3(e^{5t}) + 4(3e^{5t});$$
it satisfies $x(0) = 1$, $y(0) = 3$.
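Anyone who distrusts the differentiation can let the machine do it; a quick check of both solutions, and of the Wronskian $4e^{6t}$ that appears shortly, might look like this (assuming NumPy; the test point is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0], [3.0, 4.0]])
t = 0.37                                                     # any test point will do

phi1, dphi1 = np.array([np.exp(t), -np.exp(t)]), np.array([np.exp(t), -np.exp(t)])
phi2, dphi2 = np.array([np.exp(5*t), 3*np.exp(5*t)]), np.array([5*np.exp(5*t), 15*np.exp(5*t)])

print(np.allclose(dphi1, A @ phi1), np.allclose(dphi2, A @ phi2))    # True True
print(np.linalg.det(np.column_stack([phi1, phi2])), 4*np.exp(6*t))   # both about 4e^{6t}
```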
We leave the reader to verify linearity: if <p1(t), . .. ,cpn(t) are solutions then <p(t) = n
gcjfj (t) is a solution for any constants c1, . .. ,cn. Note that if (£31 (t) = A(t)<5j then ﬁatgym) = cjgjm = waft(t) = A(t)(0jfj(t)); interchanging cj and A(t) is okay since cj is a scalar.
To satisfy the IVP we need at least two solutions since c ($(t0)) = ($0) only if c = $0 ,
31050) 210 $050)
which then must require that yo = ﬂag/(t0). Therefore, given two solutions
(61 ($2 t = , t = ,
‘5“ ) (t) ‘5“ ) gm) we must be able to ﬁnd two constants c1, c2 such that a:(t) = C1901 (t) + C2902 (t) satisﬁes 2<to>= (:3) or a (35:83?) (:82?) = (:3) But this is equivalent to c1$1(t0)+C2$2(t0)=$0 or (mote) $200)) (c1) = 0191(t0)+02y2(t0)=y0 211050) 212050) 02 62 , Ordinary Differential Equations—A Brief Eclectic Tour From uniqueness of the solution of the IVP, the constants 01 and C2 are unique, and since
we have the lofty goal of solving the IVP for any choice of to, we want to be able to
. solve the above system of equations for any t = to. Consequently we want to ﬁnd two solutions satisfying W(<£1,<£2)(t) = det Ede 0. In case the reader skipped over the ﬁrst approach, the determinant is called the Wronskian
and we will simply denote it by Observe that d d _ . . .
d—tW(t) = E03110 — 9321/1) = 9311/2 + 9311/2 — 9321/1 — 9321/1, and if one substitutes the expressions for :i:1 and 311, 9'32 and Q2 then combines terms one
obtains 52mg = [a(t) + d(t)](a:1y2 — mm) = [a(t) ’+ d(t)] W(t). It follows that W(t) = Wag) exp [ t:(a(s) + dams] = Wag) exp [ft tr A(s)ds] to which is Abel’s Formula. Recall that the trace of a square matrix A is the sum of its
main diagonal terms, and is denoted by tr A. Therefore, W(t) is identically zero when W(t0) = 0 for some to, or it is never zero,
in which case a linear combination 01901 (t) + 02902 (t) will solve any IVP. The pair (p1 (t), 902 (t) are called a ﬁmdamental pair of solutions, and we can infer that the solution space of :i: = A(t)x is a linear space of dimension 2. Example. In the previous example where A = i) let
6t 65t
«310) = , «329) =  6t 5t t
W03) = det ( t 3:51,) = 4th = 4exp [/0 tr A(s)ds] —6 Then so we conclude that they are a fundamental pair, and every solution of a‘: = An: can be e" 65" x(t)
5“) ‘ Cl +02 ‘ (ye)
where .7:(t) = cle" + c265”, y(t) = —cle’ + 30265”. If to = 0 and the IC were 33(0) = 4,
y(0) = 1, we would solve 01 + 02 = 4, —c1 + 302 = 1 to obtain 01 = 11/4, 02 = 5/4. written as 4 Second Order Equations 63 To prove the existence of a fundamental pair we proceed as before. Let (p1(t) and (p2 (t) be solutions satisfying the initial conditions <1) (W) <°>
t = = , t = :
951(0) (to) 0 95x 0) We) 1
then W(to) = det((1J ‘1’) = 1 76 0 so W(t) is never zero, hence (p1(t) and (p2(t) are a
fundamental pair. The solution of the [VP N N
at = Amos, $(to) = (“m
~ ~ ~ 210 Countless other pairs of fundamental solutions can be found by noting that given a
fundamental pair (p1(t), (p2(t) and any nonsingular matrix A = (‘0‘ 3) then 2:1(t) 2:2(t) _ e aml(t)+cm2(t) ba:1(t)+dm2(t)
det [(2110) 212(0) A] _ d t (ay1(t)+ cy2 (t) byi (t) + dy2(t)) = det(W(t)A) = det W(t) detA 76 0 > is a:(t) = 2:081 (t) + 11/0826). and since 031 (15)) (932 (15)) ($1 (15)) ($2 (15))
t = a + c t = b + d
f?" ) (mt) mt) ’ “54‘ ) mt) mt)
are solutions by linearity, we have created another fundamental pair.
All of the above analysis carries over to higher dimensional linear systems. The question
of linear independence of a set of fundamental solutions is not discussed, and the reader
is referred to the ﬁnal paragraphs of the preceding section, and to linear algebra theory. Now we can immediately attack the second order equation 573(t) + a(t)d: + b(t)a: = 0
by noting that it is equivalent to the twodimensional system iguana—awn °‘ =(—I?<t> —al<t>) Then a fundamental pair of solutions w = (:33) = (:18), em = (15:83) = (:8), exists for the system, which is equivalent to a fundamental pairof solutions 9:1(t), 2:2 (t)
existing for the second order equation. Their Wronskian is W(t) = det (MUN/205) “ “(0211(0)
= det (231(t)m'2(t) — x2(t):i:1(t)) = W(to)exp [— /t a(s) d3] since tr A(t) = —a(t). Given any fundamental pair 9:1(t), 3:2(t) another fundamental
pair is 933(t) = ax1(t) + cw2(t), 9:4(t) = bx1(t) + dx2(t), 64 Ordinary Diﬂerential Equations—A Briet Eclectic Tour where ad — bc 76 0. Every solution .1:(t) can be expressed as a linear combination
.1:(t) = clxl(t) + 021:2 (t), where x1(t), x2(t) are a fundamental pair. 5 Comments on the Two Approaches The future presenter of the introduction to the subject may have by now tossed this book
on the pile intended for the next library book sale, mumbling about its unsuitability,
and returning to the somewhat mindnumbing parade of linear independence, constant
coefﬁcient second order equations, constant coefﬁcient higher order equations, constant
coefﬁcient linear systems (sometimes with a big dose of linear algebra), etc. The problem will be that in a one semester course, this approach will likely leave little
time to develop the most important topics, which would certainly include qualitative theory
and stability, possibly Hamiltonian systems, the phase plane (n = 2!), possibly boundary
value problems, and maybe even chaos. If the audience is engineers then knowledge
of the Laplace transform is usually required—fortunately there is now good software to
avoid the tedious partial fractions. Most of today’s introductory texts are replete with
mathematical models, many of which are worth exploring, but such explorations beneﬁt
from a thorough grounding in the theoretical tools, which are really quite few. Time management will be the problem, and the author’s suggested approaches—felling
two birds with one Wronskian?—may help to relieve the problem, and be able to move
faster into the richer plateaus of ordinary differential equations.

6 Constant Coefficient Equations—The Homogeneous Case

At last—we can get our hands on some real solutions! True enough, and it depends on the simple fact that $x(t) = e^{\lambda t}$ is a solution of $\ddot{x} + a\dot{x} + bx = 0$, $a, b$ real constants, if and only if $\lambda$ is a root of the characteristic polynomial
$$p(\lambda) = \lambda^2 + a\lambda + b = 0,$$
since $\frac{d^2}{dt^2}(e^{\lambda t}) + a\frac{d}{dt}(e^{\lambda t}) + be^{\lambda t} = e^{\lambda t}p(\lambda)$. The various cases are:

a. $p(\lambda)$ has two distinct real roots $\lambda_1, \lambda_2$, $\lambda_1 \neq \lambda_2$. Then $x_1(t) = e^{\lambda_1 t}$ and $x_2(t) = e^{\lambda_2 t}$ are solutions and
$$W(t) = \det\begin{pmatrix} e^{\lambda_1 t} & e^{\lambda_2 t}\\ \lambda_1 e^{\lambda_1 t} & \lambda_2 e^{\lambda_2 t}\end{pmatrix} = (\lambda_2 - \lambda_1)e^{(\lambda_1 + \lambda_2)t} \neq 0,$$
so they are a fundamental pair.

b. $p(\lambda)$ has double root $\lambda_1$, so $x_1(t) = e^{\lambda_1 t}$ is one solution—what is the other? One can
guess $2 (t) = te’m and ﬁnd out that it works—deﬁnitely a time saving approach! But
this is a nice time to reintroduce reduction of order, an important tool. First note that if A1 is a double root of p(/\) then from the quadratic formula A1 = —% and b = §. 4 Second Order Equations 65 Now let 332(t) = 6"” ft u(s)ds and substitute it into the differential equation to get, after canceling the 6‘”, 11+ (2A1 +a)u = 0. This implies ’[L = 0 so u(t) = A, and 332(t) = Ate’qt + Be“. The second term is
our original 331(t), and since A is arbitrary we conclude that 332(t) = te’\1‘ is the second
solution. Then eklt teklt ~ = det (Ale/\U‘. eklt + Altexlt ) = e2A1t # 0, and therefore 9:1 (t) and 932(t) are a fundamental pair.
c. p(/\) has complex roots, and since a, b are real they must be complex conjugate pairs
/\1 = r+i0, A2 = r— 2'0, where i2 = —1. Even if the neophyte has not studied complex variables, it is essential that an important
formula be introduced at this point: em = cos0 + isin 0.
It Can be taken for granted, or motivated by using the inﬁnite series for cos 0 and sin 0,
and the multiplication rules for products of 2'. Besides, it leads to the wonderful relation
ei7r + 1 = 0, if 0 = 7r, wherein e,z‘,7r,0 and 1 share in a fascinating concatenation.
Letting 0 —> —0 gives ‘ 6—29 = cos0 — isin0 and you obtain the key formulas eta + 6—29 eta _ e—i0 cos0=——, sin0=—_ 2 22 The law of exponents holds so 331(t) = exit = e(’+i9)t = ertem, and 9:2(t) = e’\2‘ =
(ah—"9” = eerie—m, are solutions, and any linear combination a:(t) = 013:1 (t) + 023:2(t) is also a solution. Nothing about linearity prohibits cl and 02 being complex numbers so ﬁrst let cl = C2 = %: m1“) = _erte1.0t + _erte zOt = ert : ert COS 9t, 2 2 2
a real solution! Now let c1 = ﬁ, 02 = —% and obtain
1 _ 1 . eiﬂt _ e—iﬂt I
1.2“) = 2_ie2t620t _ gene—wt = ert ___ ert Sln at,
another real solution! Now compute their Wronskian W ( t) _ 6" cos 0t 6” sin 0t
_ re" cos 0t — 0e” sin 0t re” sin 0t + 06” cos 02 = 062"(cos2 0t + sin2 0t) = 0e" 75 0 since 0 76 0; they are a fundamental pair. 66 Ordinary Differential Equations—A Briel Eclectic Tour In these few pages we have covered everything the beginner needs to know about
linear, homogeneous (no forcing term), constant coefficient equations, which should be
reinforced with some practice. Further practice should come from doing some applied problems. Remark. The Euler Equation i+§¢+t32z=d tgéo, should be brieﬂy mentioned, since it is a camouﬂaged linear equation and does come up
in some partial differential equations, via separation of variables. It also provides absurdly
complicated variation of parameters problems for malevolent instructors. Solution tech
nique: let :c(t) = t", substitute to get a characteristic polynomial p(/\) = A2+(a—1)/\+b,
then work out the three possible cases as above. Remember, the Euler Equation is one of the few explicitly solvable ﬂy specks in the
universe of nonconstant coefﬁcient linear differential equations. Do you want to venture beyond n = 2? You do so at your own risk in view of the
fact the majority of high school algebra texts have shelved factoring polynomials in the
same musty bin they stored finding square roots. Of course your friendly computer algebra
program can do it for you but what is being learned? And then you will have to wave your
hands and assert the resulting solutions form a fundamental set of solutions because they
are linearly independent, unless you want to demonstrate some sleepinducing analysis. Example 1. Wish to prove eAlt,e’\2t, . .. ,eAnt, A. distinct, are linearly independent?
You could write down their Wronskian W(t), then
1 . . . 1
/\1 An
W(0) = det
NIH Ag—l which is Vandermonde’s determinant (Whoa!) and is never zero if the A. are distinct. Or
suppose are)“ + 028)“ + ' ~ ' + one)” = 0, ‘00 < t < 00’ and the or are not all zero. Multiply by e‘Alt, differentiate, then multiply by (Yul—Mt, differentiate eventually get something like Acne()‘"‘)‘"1)t E 0 where A depends
on all the differences and is not zero so 0,, = 0. Now repeat to show cn_1 = 0, etc.,
making sure to drop a book on the ﬂoor once in awhile. Example 2. It gets worse when you have multiple roots /\j so 63"”, teAit,...,
tkeAJ't are solutions if /\j has multiplicity k. A linear combination of all the solutions
being identically zero will produce an expression like P1(t)e’\1t +P2(t)e)‘2t + ' ‘ ' +Pn(t)e)‘"t = 07 ‘00 < t < 00a where the 12.05) are polynomials. Multiply by e‘Alt, and differentiate enough times to
make p1(t) disappear. Now multiply by e_(’\2_)‘1)t, differentiate some more, . . . . Maybe
you are a little more convinced of the wisdom of sticking with n = 2. 4 Second Order Equations 57 7 What To Do With Solutions To start with an elementary, but very useful, fact, it is important to be able to construct
and use the phase—amplitude representation of a periodic sum: If ¢(t) = asinwt + bcoswt then ¢(t) = Acos(wt — ¢) where A = x/a2 + b2
is the amplitude and (:5 = tan‘1 a/ b is the phase angle or phase shift. This is trivially proved by multiplying the first expression by Va2 + ()2 /\/ a2 + b2 and
using some trigonometry. The representation makes for a quick and illuminating graphing
technique: Example. ¢(t) = 3sin 2t + 5cos 2t so A = x/gz and as = tan‘1 3/5 x 0.54 rad., hence
¢(t) = \/3_4005(2t — 0.54), and 2t — 0.54 = 0 gives t = 0.27. a}? We will shortly see the effective use of the representation in studying stability.
There are three popular models from mechanics used to discuss undamped or damped
motion; we ﬁrst consider the former: (iii) x small (i) is a suspended mass on a spring, (ii) is the mass connected to a spring moving on a
track, and (iii) is a bob or pendulum where a: is a small angular displacement from the
rest position. In (i) L is the length the spring is stretched when the mass is attached. All
share the same equation of motion {75 + w% = O, the harmonic oscillator, where w2 2 g/L in (i) and (iii), and w2 = k, the spring constant in (ii). In the case of
the pendulum the equation of motion is an approximation for small displacement to the
true equation, it + 0.12 sin :7: = 0. Most problems are easy to solve except for the penchant
some authors have for mixing units—the length is in meters, the mass is in drams, and it
takes place on Jupiter. Stick with the CGS or MKS system! 68 Ordinary Diﬂercntial Equations—A Brief Eclectic Tour If there is a displacement 51:0 from rest a: = 0, or an initial velocity yo imparted to the
mass, then the IVP is £15 + M211: 2 0, 55(0) = $0,§3(0) = 310 whose solution is 5’3“) = 350—0 Sim” + $0 coswt = V5133 + yg/w2 cos(wt — us) where 95 = tan‘1 yo/wmo. The phaseamplitude expression gives us a hint of the phase plane as follows: , = —w‘/$l:g + yﬁ/w2 sin(wt — gt) (‘ic(t))2 W2 and therefore 56(02 + = (153+ Jag/£02) Or if we plot a:(t) vs. 2 y(t) then we y(t>2 _ 1
x3 + ya/w2 + mama + ya ‘ ’
which is an ellipse. It is the parametric representation of all solutions satisfying a:(to) 2
51:0, a':(to) 2 yo for some to. Furthermore, we see from the phaseamplitude representation
of the solution that a change in the initial time is reﬂected in a change in the phase angle
of the solution, not the amplitude. ' If we add damping to the model, which we can assume is due to air resistance or friction
(in the case of the pendulum at the pivot), and furthermore make the simple assumption
that the dissipation of energy is proportional to the velocity and is independent of the
displacement, we obtain the equation of damped motion fri+ctb+w2a3=0, c>0. Now things get more interesting; the standard picture demonstrating the model might add
a dashpot to represent the frictional force: Models of RLC circuits can also be developed which lead to the same equation, but since
the author doesn’t know an ohm from an oleaster they were omitted. The motion depends on the nature of the roots of the characteristic polynomial—you
could imagine it as the case where the cylinders of your Harley Davidson’s shock absorbers
were ﬁlled with Perrier, Mazola, or Elmer’s Glue. The polynomial is p()\) = A2 +cA+w2
with roots 1 2
A1,2=§(—ci\/C2—4w2)=§(—lﬂ: l—ﬂ). 4 Second Order Equations 69 Case 1. Overdamped, 0 < 4—2”; < 1.
Both roots are distinct and negative: /\1 = —r1, A2 = ~r2, and $(t) = (Ia—’1‘ + be’rzt,
r1, m > 0, where a and b are determined by the IC. If n < m then [WM 5 [ale—“t + lblewt S (lal + “’06—?” which goes to zero as t —> 00. The solutions will look like this: X xo Remark. Most plots depicting this case look like the one above, but the solution can
have a small bump before plummeting to zero. Look at :1:(t) = 3e“ — 2e‘2‘. Case 2. Critically damped, 4—5; = 1.
There is one double root /\ = —c/ 2 < 0 and the solution satisfying :1:(0) = $0, 13(0) = yo
is $05) = woe—C“2 + (yo + cwo/Z)te_°‘/2. If mo, yo are positive it will have a maximum at t = tC = % > 0 then decrease
to zero. Since the size of the maximum could be of interest in the design of the damping
mechanism, assigned problems should be to ﬁnd the maximum value of a:(t) and when it occurs. Case 3. Underdamped, 4—5; > 1.
Now the roots are —§ d: i0, 0 = ‘l 4—2”; — 1, so the solution will be :1:(t) = (Ia—“ﬂ sin 0t + bra—“V2 cos 0t
= Asa“ﬂ cos(0t — ¢) where the maximum amplitude A and the phase angle are determined by the initial
conditions. A possible graph would be:  7] Ordinary Differential Equations—A Brief Eclectic Tour The dashed graph is
i A(cos ¢)e_cv2 The interesting problems for the underdamped case are to estimate a ﬁrst time beyond
which the amplitude is less than a preassigned small value for all later times. These
problems afford a nice combination of analysis and the advantages of today’s calculator or computer graphing capabilities. Examples. l. For the system governed by the IVP 1
$$\ddot{x} + \frac{1}{8}\dot{x} + x = 0, \quad x(0) = 2, \quad \dot{x}(0) = 0,$$
accurately estimate the smallest value of $T$ for which $|x(t)| < 0.5$ for all $t \ge T$.

Discussion: This can be done solely by using a graphing calculator or a plotting package, but doing some analysis first has benefits. The solution is
$$x(t) \approx 2.0039\,e^{-t/16}\cos\!\left(\frac{\sqrt{255}}{16}t - 0.0626\right),$$
and since $\cos\theta$ has local extrema at $\theta = n\pi$, $n = 0, 1, 2, \ldots$, first solve $\frac{\sqrt{255}}{16}t - 0.0626 = n\pi$ to get $t_n = \frac{16(n\pi + 0.0626)}{\sqrt{255}}$. Then we want $|x(t_n)| = 2.0039\exp(-t_n/16) < 0.5$, which gives a value of $n$ of about 7. Check some values:
$n = 7$, $t_7 \approx 22.097$, $x(t_7) \approx -0.5036$
$n = 8$, $t_8 \approx 25.245$, $x(t_8) \approx 0.4136$
$n = 9$, $t_9 \approx 28.392$, $x(t_9) \approx -0.3398$
[Graph omitted.] Now call in the number cruncher to solve
$$2.0039\,e^{-t/16}\cos\!\left(\frac{\sqrt{255}}{16}t - 0.0626\right) = -0.5$$
with $t_{\mathrm{est}} = t_7 \approx 22.097$ to get $T \approx 22.16968423$ and $x(T) \approx -0.49999999$.
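The number crunching can be done with any root finder; a bisection sketch in Python is one option (the amplitude, phase, and quoted answer come from the discussion above, while the bracket is my own choice):

```python
import math

A, phi, w = 2.0039, 0.0626, math.sqrt(255) / 16
x = lambda t: A * math.exp(-t / 16) * math.cos(w * t - phi)

lo, hi = 22.097, 23.0          # bracket the crossing just past t_7
for _ in range(60):
    mid = (lo + hi) / 2
    if x(mid) < -0.5:
        lo = mid
    else:
        hi = mid
print(lo, x(lo))               # about 22.1697 and -0.5, agreeing with the value quoted
```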
A similar problem is the following: 2. Find a second order linear differential equation and initial conditions whose solution
is $x(t) = 2e^{-t/20}\sin t$. Then find an accurate estimate of the smallest value of $T$ for which $|x(t)| \le 0.1$ for $t \ge T$.

Discussion: The second half of the problem is like the previous one, and $T \approx 58.49217$. But it is surprising how many students have trouble with the first half and go to unreasonable efforts of calculation. From the solution we infer that the roots of the characteristic polynomial are $\lambda_{1,2} = -\frac{1}{20} \pm i$, and since the characteristic polynomial can be written as $\lambda^2 - (\lambda_1 + \lambda_2)\lambda + \lambda_1\lambda_2$, we get the ODE $\ddot{x} + \frac{1}{10}\dot{x} + \left(\frac{1}{400} + 1\right)x = 0$, and from the solution that $x(0) = 0$, $\dot{x}(0) = 2$.

8 The Nonhomogeneous Equation, or the Variation of Parameters Formula Untangled

We consider the nonhomogeneous, second order, constant coefficient differential equation
$$(*)\qquad \ddot{x} + a\dot{x} + bx = q(t),$$
where $q(t)$ is continuous on $r < t < s$, and consequently solutions of the IVP $\ddot{x} + a\dot{x} + bx = q(t)$, $x(t_0) = x_0$, $\dot{x}(t_0) = y_0$, will exist, be unique, and be defined for $r < t < s$, any $r < t_0 < s$, and $-\infty < x_0, y_0 < \infty$. The goal is to find a particular solution $x_p(t)$ of (*), involving no arbitrary
constants. Its general solution will then be of the form a:(t) = ¢(t) + (Up (t), where ¢(t)
is the general solution of the homogeneous equation it + aa’: + bx = 0. Since we know a fundamental pair of solutions x1(t), x2(t) of the homogeneous
equation exists, we can write x(t) = c1x1(t) + @1132 (t) + $p(t), then solve for c1 and Cg if we are given initial conditions. In some texts, the homogeneous
equation is renamed and becomes the complementary equation, and ¢(t) is called the
complementary solution. This is old fashioned usage and merely adds to the vocabular
baggage. In the case where q(t) is itself the solution of some constant coefficient, possibly higher
order, linear ODE, one can use the Method of Undetermined Coefﬁcients, or Comparison
of Coefﬁcients (or Judicious Guessing). The method is sometimes guided by half a page
of rules like 72 Ordinary Differential Equations—A Brief Eclectic Tour If q(t) ——— P(t) cos wt + Q(t) sinwt, where P(t) and Q(t) are polynomials, let
k be the larger of the degrees of P and Q. Then a trial particular solution is of
the form :L'(t) = A(t) cos wt + B(t) sin wt, where A(t) and B(t) are polynomials of degree k, with coefﬁcients to be
determined by comparison. In the case where a = 0 and b = w2 the trial
particular solution is rcp(t) : tA(t) cos wt + tB(t) sinwt. This may be followed by a detailed table with a column of q(t)’s and a parallel column
of suggested mp(t)’s. Or the whole method might be justiﬁed by the Annihilator Method
which gives rules on how to “annihilate” q(t)—good material for a mathematical monster
movie—“Call the National Guard—the Annihilator has escaped!.” Included with this will be the Principle of Superposition which simply says if mp; (t)
is a particular solution when q(t) = qi (t), i = 1, 2, . .. ,k, then mpi(t) is a particular
solution when q(t) = q1(t). This is a pretty simple consequence of linearity, but
its formidable monicker adds more verbal clutter. Following the philosophy that solving
equations is not the leitmotif of an introductory course, do a few examples to get a feeling
of why the method works and move on. We come now to the variation of parameters method, which was introduced in
Chapter 2 to solve ﬁrst order equations, and as mentioned, the approach taken there
goes right over to first order systems and consequently, higher order equations. We want
to take that approach because it obviates one major stumbling block present in almost
all discussions of the method applied to second order linear equations. We will brieﬂy describe that obstacle. For the first order equation ri = a(t):L' + b(t), we assumed that a particular solution
was of the form rcp(t) = m(t)u(t) where :L'(t) is a solution of the homogeneous equation
and u(t) is an unknown function. Substitutesolveetc. Since the second order equation
(where a and b can be functions of t) has a fundamental pair of solutions :L'l(t), m2(t),
to solve the inhomogeneous equation it + a(t):i' + b(t):L' = q(t)
it makes sense to assume a particular solution of the form
:L'p(t) = (1:1 (t)u1 (t) + 1132 (t)U2 Now substitute and try to solve for ul (t) and u2 (t), and here is where the problem arises.
In computing :tp(t) one obtains after regrouping dip = (211111 + 11.:2111 ‘i— ($1111 + flig’llg) and to successfully complete the argument one must “impose” the condition that the
parenthetical expression equals zero: $1(t)111 (t) + $2(t)’ll2(t) = 4 Second Order Equations 73 The arguments found in most textbooks to justify this are pretty hazy: some examples
(paraphrased) are:
We might expect many possible choices of ul and uz to meet our need.
We may be able to impose a second condition of our own choice to make the
computation more efﬁcient.
To avoid the appearance of second derivatives (of the ui) we are free to impose an
additional condition of our own choice. (Okay, I decided that I want $1111 + $2112 = 13.4079—now what?). The authors should
not be faulted for using a little legerdemain, since they are partly victims of the traditional
path they have followed to get to this point. A much more enlightening approach is to mimic the variation of parameters derivation
used previously for ﬁrst order equations. Recall that for the equation :t = a(t)m +
' b(t) one assumes a particular solution of the form mp(t) = <I>(t)u(t), where <I>(t) = exp [ft a(s) d3]. Then it is substituted into the ODE to obtain a ﬁrst order differential
equation, which is solved to get mp(t) = <I>(t) /<I>_1(s)b(s) ds.
But we can apply the same idea to the nonhomogeneous system
It = (16):!3 + b(t)y + 1105), 2) = C(t)m + d(t)y + 110*)
if we ﬁrst rewrite it as
. _ it _ m _ a(t) b(t) m p(t)
E ‘ ‘ AW + 5“) ‘ (ea) do) + (no) then determine what is <I>(t), a fundamental matrix. Our previous discussion has produced
the solution of the homogeneous system w = (:83) = (:13 3:83) (:3) where co].(:r1(t), m2(t)), col(y1 (t), y; are a pair of fundamental solutions and there’s
our <I>(t)! Furthermore it is invertible since it is nonsingular and we can always choose m1(t), 32(t) so that <I>(t0) = I.
Now just mimic what was done before, ﬁrst noting that <I>(t) satisﬁes the matrix ODE a = (:2: 33:) = (:83 2E3) (:2; 3:) = AW The variation of parameters assumption is that the particular solution mp(t) can be
expressed as mp(t) = <I>(t)u(t), so substitute: 3%,, = dm + in; = A(t)<I>u + B(t).
Since d) = A(t)<I>, the ﬁrst term in each expression cancels and we get in; = 130:) : g = <I>‘1(t)§(t) : got) 2 f <1>1(s)1§(s)ds N 74 Ordinary Diﬂerential Equations—A Briet Eclectic Tour hence
gao=mo/¢*ewem& If we are solving the IVP, we could put limits on the integral and use ftto, then 21,050) = 2,
so the particular solution plays no role in the evaluation of the necessary constants in the general solution.
But we were sticking to n = 2 weren’t we? Yes, and that makes life even easier since
inverting 2 x 2 matrices is a pieceof cake. Recalling that 961(15) y1(t) _
d“QM)wm)‘Wm¢m we have 4 _xw)mo“_ 1 2w ww)
Q W‘QM)QJ ‘WE(iM)mm> and now we can write down the expression for the particular solution in all its splendor—
but we won’t. The topic of this chapter is second order equations, and we don’t want to
stray too far from the terra ﬁrrna of n = 2. So consider the special case A(t) = (4305) I 1110») ’ 1505) = ((125)) which corresponds to the second order equation
5'15 + a(t):i: + b(t)a: = q(t). In this case given a fundamental pair of solutions x1(t), :52 (t), let _ $105) $205) —1 __1_ 56205) —$2(t)
¢m_Qﬁ)MJ’Q W‘Wmimm mm) and W(t) = exp [— ft a(s) (13], therefore W) = (2&3) = (:18; :13) /‘ as) (3:52) 2:25)) (42)) But we are really only interested in the expression for xp(t), so after a little matrix
multiplication we pick out the ﬁrst component of 55,, (t) and get This is exactly what we would get if we solved the system of equations for col(111, 112),
(1)1111 + (32112 = 0, {$1111 + {$2112 = q(t), then integrated. Note these contain the “mysterious” ﬁrst equation which never appears
in the systems approach. 4 Second Order Equations 75 For the constant coefﬁcient case we will derive an alternate, and often more useful,
form of the particular solution, but we will not expand the discussion to higher order
equations or to systems. Just keep the encapsulating statement in mind: The general solution of the system is = A(t)x + B (t) is given by the expression 50:) = ems + <1>(t) / <1>1(s)§(s) ds where is any fundamental matrix of the homogeneous system it = A(t)x
and c is determined by initial conditions, if any. Note that the expression for xp(t) is independent of the choice of a fundamental matrix
<I’(t), since if @105) = <D(t)Q, where detQ 76 0 then ©1(t)/ ¢;1(s)1~3(s)ds = <I>(t)Q/ Q‘1‘I’_1(s)]~3(s)ds = <1>(t) / <1>—1(s)1~3(s)ds. Example. i+ﬁgx = q(t), t aé 0, an Euler equation with m1(t) = 151/2, m2(t) = 151/2 log t,
and since a(t) = 0, (t) = 1. 131/2 t1/2logt
CM) _ (rm/2 15—1/2 + %t‘1/210gt) SO _1 t—1/2 + %t—1/2 —t1/2 Q = _t_1/2/2 t1/2 and therefore xp(t) _ tl/2 tl/zlogt
:tp(t) “ t‘1/2/2 t—1/2+%t1/210gt h/t 31/2+%s'1/2logs —31/210gs 0 d5
_3——1/2/2 51/2 (1(3) from which we obtain the expression t
m (t) = —t1/2 (51/2 logs q 5 ds +t1/2logt 51/2q 5 ds.
17 The example points out the real problem with variation of parameters—any exercises
are not much more than tests of integration skill, e.g. let q(t) = t3/2 above. More
likely, the answers can only be computed numerically, e.g. let q(t) = cos t above, or even
simpler, solve i + x = H”. The importance of the variation of parameters method is
that, for the system, it gives an exact integral representation of the solution, which can
be very useful in determining bounds on solutions, long time behavior, and stability if we
have information about <P(t) and B(t). We will see examples of this in Chapter 5. The best advice, whatever method or approach one uses to introduce variation of
parameters, is to work a few problems then stick a bookmark in that section of the book
in case you ever need to use the formula. 75 Ordinary Ditterential Equations—A Briet Eclectic Tour 9 Cosines and Convolutions The most interesting problems for the second order, constant coefﬁcient equation are those
generally described as the forced (damped or undamped) oscillator. A typical mechanical
conﬁguration is a spring/mass system being driven by a periodic forcing term: —:k:16 / Fcosqt
. 2n?
///// ///////7 For simplicity we will assume that the natural frequency (when k and F are zero) of the
system is w = 1, so the equation describing the system is fri+ksi3+x=Fcosqt, k>0.
Solving the equation gives a solution of the form
a:(t) = xu(t) + Kcos(qt — ¢) where xu(t) is the solution of the unforced system (F = 0)
K cos(qt — ¢) is the particular solution. Since k > 0 we know xu(t) —) 0 as t —) 00; it is called the transient solution. Hence
a:(t) —> K cos(qt — o), the steady state solution. But all is not as simple as it appears,
so we need to look at some cases. Case 1: damping: k > 0, and q = 1.
The frequency of the forcing term matches the natural frequency. Solving the equation
we have a:(t) = xu(t) + %c05(t — 7r/2) = xu(t) + gsint, so the solution approaches a natural mode of the undamped system 513 + a: = 0. Note that
the amplitude‘F/k can be very large if k > 0 is small. Case 2: damping: k > 0, and q 7E 1.
A little calculation results in the steady state solution
1 93(15): FK(¢1)C05(qt — ¢), KM) = W, and K (q), which multiplies F, the amplitude of the forcing term, is called the ampliﬁ
cation factor or gain.
It is a simple ﬁrst semester calculus exercise to show that i) K(0)=1,K(q)—>0asq—>oo. 4 Second Order Equations 77 ii) K’ (q) = 0 for Q2 = 1 — %k2, which will be positive when k < \/_2_, and in this
case it will have a maximum (In/1  k2/4)_1. Otherwise (k > ﬂ) and K(q)
monotonically decreases to zero. iii) If k = 0, K (q) will have a vertical asymptote at q : 1.
A graph of the gain vs. q looks like If we ﬁx k at a value 1 < k < \/§ we can “tune” the system to get maximum amplitude
by adjusting q, the forcing frequency. A good practical problem could be: let F = 10
then plot K (q) for various values of k, and compute the maximum gain, if any, and the
phase shift in each case. The Undamped Cases: I: = 0'
The equation is :2 + x = Fcos qt and if we assume the system starts at rest so 113(0) = :i:(0) = 0, there are two cases
Case 1: q # 1
The solution is m(t) = Tgﬁcos qt — cos t) a' superposition of two harmonics, with a
large amplitude if q is close to 1. Via a trigonometric identity for the difference of two
cosines we get 2F
1 — (12
and suppose q is close to 1, say q = 1 + 6, 6 > 0. Then w(t) = 262562) sin at] sin(2 + e)t z(t) = (sin(q + 1)t)(sin(1 — q)t) and the bracketed term will have a large amplitude and a large period 27r / 6. The other
term will have a period 2—243 ~ 7r, so we see the phenomenon of beats, where the large
amplitude, large period oscillation is an envelope which encloses the smaller amplitude, smaller period oscillation. Case 2: q = 1
Instead of using undetermined coefﬁcients, or variation of parameters for the intrepid
souls, to solve this case, go back to the frrst expression for the solution in the case q 76 1, 78 Ordinary Diﬂerential Equations—A Brief Eclectic Tour and let q = 1 + e. We have a:(t) = 1_—(fY+—€)2[cos(1 + e)t — cos t] and with a little trigonometry and some readjustment this becomes —F
2+6 we (t) = cos d — 1 sin 6t] [(cost)— — (sin t) 6 6 Now use L’HOpital’s Rule letting ‘6 —> 0 to get 21—13(1) 1:5(t) = a:(t) = —2ts1nt, which is the dreaded case of resonance. For the more general case 51': + (4129: = Fcos wt we get a:(t) = £tsinwt. A sample
problem might be one similar to those suggested for the underdamped system: A system governed by the IVP 51': + 41: = % cos 2t, 93(0) = = O, “blows up” when the amplitude a:(t) exceeds 24. When does this ﬁrst occur? A more general discussion of resonance can be obtained for the constant coefﬁcient
case by slightly adjusting the form of the variation of parameters formula, or using the
following result: The solution a:(t) of the IVP
51': + ad: + ba: = q(t), 93(0) = :‘v(0) = 0 is given by the convolution integral mt) = at — s>q(s>ds where ¢(t) is the solution of the homogeneous differential equation satisfying M) = 0, a0) =1. 4 Second Order Equations 79 To prove the result, ﬁrst compute the derivatives of xp(t): @(t) = ¢(t  040*) + [0 <50? — s)q(s) d8 = /0 9230?  s)q(s) d8 since ¢(0) = 0, t t .. 
int) = at — t)q(t) +1) at — s)q<s) ds = q(t) +/0 at — s)q(s) ds since ¢<o> = 1.
Then
t u .
5b,, + asp + bxp = q(t) + /0 (¢ + a¢ + b¢)(t — s)q(s) ds = q(t)
since ¢(t) is a solution of the homogeneous equation. A more elaborate proof would be to use the variation of parameters formula for each of the constant coefﬁcient cases and combine terms.
The convolution integral representation has several very nice features: a. Since mp(0) = ai'p(0) = 0, the particular solution contributes nothing to the initial
conditions, which makes the IVP it + ai‘ + bx = q(t), m(0) = :80, 2 yo easier to solve. One merely adds to mp(t) the solution of the homogeneous problem
satisfying the IC. b. Similar to the case for the ﬁrst order equation it gives an elegant representation of
the solution when $q(t)$ is a discontinuous function.

Example. $\ddot{x} - 2\dot{x} + x = \begin{cases} -1, & 0 \le t \le 2\\ \ \ 1, & 2 < t.\end{cases}$ Then $\phi(t) = te^t$ and
$$x_p(t) = \int_0^t (t-s)e^{t-s}(-1)\,ds, \quad 0 \le t \le 2,$$
$$x_p(t) = \int_0^2 (t-s)e^{t-s}(-1)\,ds + \int_2^t (t-s)e^{t-s}(1)\,ds, \quad t > 2.$$
Working out the integrals gives
$$x_p(t) = -te^t + e^t - 1, \quad 0 \le t \le 2,$$
$$x_p(t) = -te^t + e^t + 2(t-3)e^{t-2} + 1, \quad 2 < t.$$
But the real value of the convolution integral representation is that it gives a very
speciﬁc criteria for when resonance can occur in the general case. Consider the IVP 31" + m = f0), {5(0) = $0» “0) = yo,
where f(t + 277) = f(t). In this case ¢(t) = sint so m(t) = no cost + yo sint + mp(t), mp(t) = [0 sin(t — s)f(s) d3. 80 Ordinary Differential Equations—A Brief Eclectic Tour Then
t+21r
mp(t + 21r) =/ sin(t — s)f(s) ds
0 t t+21r
= / sin(t — s)f(s) ds +/ sin(t — s)f(s) ds
0 t and by making a change of variables in the second integral and since f (t + 21r) = f (t)
we get 21r
mp(t + 21r) = :1:,,(t) +/ sin(t — s)f(s) ds.
0 It follows from the last equation that
21r
23,,(t + 4n) = 23,,(t + 21r) +/ sin(t + 21r — s)f(s) ds
0 21r
= 23,,(t) + 2/0 sin(t — s)f(s) ds, and in general for any positive integer N
21r
23,,(t + 2N1r) = 23,,(t) + N/ sin(t — s)f(s) ds.
0 But if there is some value t = a having the property that [02" sin(a —s) f (.9) ds = m 95 0,
then mp(a + 2N7r) = :1:,,(a) + Nm which becomes unbounded as N —> oo. Resonance!
We conclude that The equation :‘1': + :1: = f (t), f (t + 271') = f (t), will have a 27r—periodic solution
whenever f021r sin(t — s) f (3) d3 = 0 for all t. Otherwise, there will be resonance. The result allows us to consider a variety of functions f (t) satisfying f (t + 271') = f (t),
for example

a) $f(t) = |\sin t|$, then
$$\int_0^{2\pi} \sin(t-s)\,|\sin s|\,ds = \int_0^{\pi} \sin(t-s)\sin s\,ds + \int_{\pi}^{2\pi} \sin(t-s)(-\sin s)\,ds = 0,$$
so no resonance;

b) $f(t) = \begin{cases} -1, & 0 < t < \pi\\ \ \ 1, & \pi < t < 2\pi\end{cases}$ periodically extended, then
$$\int_0^{2\pi} \sin(t-s)f(s)\,ds = -\int_0^{\pi}\sin(t-s)\,ds + \int_{\pi}^{2\pi}\sin(t-s)\,ds = 4\cos t,$$
so resonance occurs.
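Both computations are easy to confirm numerically; a sketch (Python, trapezoid rule, function names of my own choosing):

```python
import math

def resonance_integral(f, t, n=4000):
    # trapezoid approximation of the integral of sin(t - s) f(s) over [0, 2*pi]
    h = 2 * math.pi / n
    total = 0.5 * (math.sin(t) * f(0.0) + math.sin(t - 2 * math.pi) * f(2 * math.pi))
    for k in range(1, n):
        s = k * h
        total += math.sin(t - s) * f(s)
    return total * h

square = lambda s: -1.0 if s < math.pi else 1.0
for t in (0.0, 1.0, 2.5):
    print(resonance_integral(lambda s: abs(math.sin(s)), t),   # about 0: no resonance
          resonance_integral(square, t), 4 * math.cos(t))      # about 4*cos(t): resonance
```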
The above result is a specific instance of a more general result for constant coefficient systems $\dot{x} = Ax$. It states that if $\Phi(t)$ is the fundamental matrix satisfying $\Phi(0) = I$, the identity matrix, then $\Phi(t-s) = \Phi(t)\Phi^{-1}(s)$ for all $t$ and $s$. Note furthermore that $\int_0^{2\pi}\sin(t-s)f(s)\,ds = 0$ for all $t$ is equivalent to
$$\sin t\int_0^{2\pi}\cos s\, f(s)\,ds - \cos t\int_0^{2\pi}\sin s\, f(s)\,ds = 0, \quad \text{for all } t.$$
In particular, this would be true for $t = \pi/2$ and $t = \pi$, which implies that resonance will not occur if the orthogonality relations
$$\int_0^{2\pi}\cos s\, f(s)\,ds = 0, \qquad \int_0^{2\pi}\sin s\, f(s)\,ds = 0$$
are satisfied.

10 Second Thoughts

There are some other facets of second order, linear, differential equations which could be
part of the introductory course, and they merit some comment. Numerical Methods For second order equations numerical methods have no real place in a ﬁrst course,
assuming that the usual (hopefully not too long) introduction to the topic was given when
ﬁrst order equations were discussed. What should be made clear is that despite what the
computer screen might show, the numerical schemes convert the second order equation
to a ﬁrst order system—remember they can only chase derivatives. For example, to use
Euler's Method for the general IVP $\ddot{x} = f(t, x, \dot{x})$, $x(a) = x_0$, $\dot{x}(a) = y_0$, it is first converted to a first order system
$$\dot{x} = y, \qquad \dot{y} = f(t, x, y), \qquad x(a) = x_0, \quad y(a) = y_0.$$
Then the algorithm to approximate the values $x(b)$, $y(b) = \dot{x}(b)$ is:

Euler's Method
    t0 = a
    x0 = x0
    y0 = y0
    for n from 0 to N-1 do
        t_{n+1} = t_n + h
        x_{n+1} = x_n + h*y_n
        y_{n+1} = y_n + h*f(t_n, x_n, y_n)
    Print t_N, x_N, y_N

where $h = (b-a)/N$. Once this is explained, leave the numerics to the little gnome and enjoy his sagacity.
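In a language the gnome speaks, the same loop might look like this (a sketch only; the test equation $\ddot{x} = -x$ is my choice, not the text's):

```python
def euler_second_order(f, a, b, x0, y0, N):
    # Euler's method for x'' = f(t, x, x'), marching the equivalent first order system (x, y)
    h = (b - a) / N
    t, x, y = a, x0, y0
    for _ in range(N):
        x, y, t = x + h * y, y + h * f(t, x, y), t + h
    return t, x, y

# x'' = -x with x(0) = 1, x'(0) = 0 has exact solution cos t, so x(pi) should be near -1
print(euler_second_order(lambda t, x, y: -x, 0.0, 3.14159265, 1.0, 0.0, 10000))
```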
Boundary Value Problems

Unless the presentation is intended as preparation for a course in partial differential equations, and consequently the emphasis is on linear differential equations, there will be little time for boundary value problems. The Sturm-Liouville problem, self-adjointness, eigenfunction expansions/Fourier series, etc. are topics far too rich to cover lightly.
But this does not mean the subject should not be mentioned, if for no other reason to point out the big differences between the initial value problem and the boundary value
problem. The big distinction is of course that we are requiring the solution to satisfy end
point conditions at two distinct points as opposed to one initial point. A simple example
points this out:

Given the equation $y'' + y = 0$, where $y = y(x)$, how many solutions does it have for the following boundary conditions?

(i) $y(0) = 1$, $y(\pi/2) = 1$: since $y(x) = A\sin x + B\cos x$ is the general solution, then we must solve
$$y(0) = A\sin(0) + B\cos(0) = 1, \qquad y(\pi/2) = A\sin(\pi/2) + B\cos(\pi/2) = 1 \;\Rightarrow\; A = B = 1,$$
so $y(x) = \sin x + \cos x$ is the unique solution.

(ii) $y(0) = 1$, $y(\pi) = 1$: we obtain $y(0) = A\sin(0) + B\cos(0) = B = 1$, $y(\pi) = A\sin(\pi) + B\cos(\pi) = -B = 1$, a contradiction, so no solution exists.

(iii) $y(0) = y(2\pi) = 1$: we obtain
$$A\sin(0) + B\cos(0) = 1, \qquad A\sin(2\pi) + B\cos(2\pi) = 1 \;\Rightarrow\; B = 1,\ A \text{ arbitrary},$$
so we have an infinite number of solutions $y(x) = A\sin x + \cos x$.

Note: for boundary value problems, the independent variable is usually $x$ (space) as opposed to $t$ (time).
Another problem worth mentioning is the eigenvalue problem; an example is this:
Given the boundary value problem $y'' + \lambda y = 0$, $y(0) = y(\pi) = 0$, for what values of the parameter $\lambda$ will it have nontrivial solutions?

The example indicates that the question of existence and uniqueness of solutions of a boundary value problem is a thorny one.

i) $\lambda < 0$: Letting $\lambda = -r^2$, $r > 0$, the general solution is $y(x) = Ae^{-rx} + Be^{rx}$. Then $y(0) = 0$ implies $A + B = 0$, and $y(\pi) = 0$ gives $y(\pi) = Ae^{-r\pi} + (-A)e^{r\pi} = 0$. If $A = 0$ we have the trivial solution $y(x) \equiv 0$; otherwise $A(e^{-r\pi} - e^{r\pi}) = 0$ gives $1 - e^{2r\pi} = 0$, which is impossible. No solutions.

ii) $\lambda = 0$: then $y'' = 0$, or $y(x) = Ax + B$, and the boundary conditions imply that $A = B = 0$.

iii) $\lambda > 0$: let $\lambda = r^2$, $r > 0$, then $y(x) = A\sin rx + B\cos rx$; $y(0) = 0 \Rightarrow B = 0$, and $y(\pi) = A\sin r\pi = 0$ implies $\lambda = 1, 4, \ldots, n^2, \ldots$, the eigenvalues of the differential equation, with associated eigenfunctions $\sin x$, $\sin 2x$, \ldots, $\sin nx$, \ldots.

This is a good point to stop, otherwise one is willy nilly drawn into the fascinating topic
of Fourier series. Remark. If you are driven to do some numerical analysis, taking advantage of today’s
accessible computer power, consider doing some numerical approximations of boundary
value problems using the shooting method. It has a nice geometric rationale, and one can
use bisection or the secant method at the other end to get approximations of the slope.
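A bare-bones version of that idea, for $y'' = f(x, y, y')$ with $y(a) = \alpha$, $y(b) = \beta$, might read as follows (a sketch only—Euler integration inside, bisection on the slope, and every name below is mine, not the text's or Burden and Faires'):

```python
import math

def shoot(f, a, b, alpha, beta, s_lo, s_hi, N=2000):
    # choose the slope s = y'(a) by bisection so that the integrated y(b) hits beta
    def end_value(s):
        h = (b - a) / N
        x, y, yp = a, alpha, s
        for _ in range(N):
            y, yp, x = y + h * yp, yp + h * f(x, y, yp), x + h
        return y
    for _ in range(60):
        s_mid = (s_lo + s_hi) / 2
        if (end_value(s_mid) - beta) * (end_value(s_lo) - beta) > 0:
            s_lo = s_mid
        else:
            s_hi = s_mid
    return s_mid

# test: y'' = -y, y(0) = 0, y(pi/2) = 1 has exact slope y'(0) = 1
print(shoot(lambda x, y, yp: -y, 0.0, math.pi / 2, 0.0, 1.0, 0.0, 2.0))
```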
The book by Burden and Faires has a very good discussion. Some sample problems: i) y” + ezxy’ + y = 0, 74(0) = 0711(1) = 05
(initial guess y’(0) = 1.2)
ii) y” + Zy’ + my 2 sinx,y(0) = 0,y(1) = 2 (initial guess y’(0) = 4.0) 84 Ordinary Diﬂerential Equations—A Brief Eclectic Tour iii) y” — 732,5 = 0,y(0) = 2/3,y(7/4) = 1/2 (initial guess y’ (0) = —O.15) Stability This is a good spot to brieﬂy mention the stability of the solution x(t) E O of the
Stability

This is a good spot to briefly mention the stability of the solution x(t) ≡ 0 of the constant coefficient equation ẍ + aẋ + bx = 0. Going back to the mass-spring system we see that if x(t) ≡ ẋ(t) ≡ 0, the system is at rest—this is an equilibrium point. From the nature of solutions we see that

a. If the roots λ of the characteristic polynomial have negative real parts then every solution approaches 0 as t → ∞. The cases are

λ₁, λ₂ < 0:  x(t) = c₁e^{λ₁t} + c₂e^{λ₂t}
λ < 0 double root:  x(t) = c₁e^{λt} + c₂te^{λt}
λ = r ± ia, r < 0:  x(t) = Ae^{rt} cos(at − φ).

In each case we see that for any ε > 0, if |x(t₁) − 0| is sufficiently small for t = t₁ then |x(t) − 0| < ε for all t > t₁, and furthermore lim_{t→∞} x(t) = 0. We conclude that

If the roots of the characteristic polynomial have negative real parts then the solution x(t) ≡ 0 is asymptotically stable.

b. If the roots λ of the characteristic polynomial are purely imaginary, λ = ±ia, then the solution is x(t) = A cos(at − φ). We see that if A, which is always positive, satisfies A < ε, then |x(t) − 0| = |A cos(at − φ)| ≤ A < ε, so as before, if |x(t) − 0| is sufficiently small for t = t₁ then |x(t) − 0| < ε for all t > t₁, but lim_{t→∞} x(t) ≠ 0. We conclude that

If the roots of the characteristic polynomial have zero real parts then the solution x(t) ≡ 0 is stable.

From the argument above we see that for a solution to be asymptotically stable it must first be stable—an important point that is sometimes overlooked.

c. Finally, in the case where one or both of the roots of the characteristic polynomial have a positive real part, we say that the solution x(t) ≡ 0 is unstable.

We can use a default definition that not stable means unstable, and clearly in the cases λ₁, λ₂ > 0, double root λ > 0, or λ = α + iβ, α > 0, we see that any solution x(t) becomes unbounded as t → ∞ no matter how small its amplitude. What about the case λ₁ < 0 < λ₂? While there is a family of solutions c₁e^{λ₁t} which approach zero as t → ∞, the remainder are of the form c₂e^{λ₂t} or c₁e^{λ₁t} + c₂e^{λ₂t}, which become unbounded. Given any ε > 0 we can always find solutions satisfying |x(0)| < ε where x(0) = c₂ or x(0) = c₁ + c₂ and c₂ ≠ 0. Such solutions become unbounded as t → ∞, so the homespun stability criterion—once close, always close—is not satisfied.
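A few lines of Python make the trichotomy concrete; the helper below is a sketch of my own that classifies x(t) ≡ 0 for ẍ + aẋ + bx = 0 directly from the roots of the characteristic polynomial λ² + aλ + b.

import numpy as np

def classify(a, b):
    # Classify x(t) = 0 for x'' + a x' + b x = 0 from the roots of
    # lambda^2 + a*lambda + b = 0.  (Assumes purely imaginary roots are
    # simple, i.e. the case a = 0, b > 0, as in the discussion above.)
    re = np.roots([1.0, a, b]).real
    if np.all(re < 0):
        return "asymptotically stable"
    if np.all(re <= 0):
        return "stable"
    return "unstable"

print(classify(3.0, 2.0))    # roots -1, -2      -> asymptotically stable
print(classify(0.0, 4.0))    # roots +-2i        -> stable
print(classify(1.0, -2.0))   # roots 1 and -2    -> unstable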
It is important to go through the cases above because they are an important precursor to the discussion of stability of equilibria in the phase plane of a linear or almost linear two-dimensional autonomous system. In fact, they serve as excellent and easily analyzed models of the various phase plane portraits (node, spiral, saddle, center) of constant coefficient systems. This will be demonstrated in the next chapter.

Infinite Series Solutions

Perhaps the author's opinion about power series solutions is best summarized by a
footnote found in Carrier and Pearson's succinct little book on ordinary differential equations, at the start of a short ten-page chapter on power series. The Chapter 11 referred to is an eleven-page chapter with some of the important identities and asymptotic and integral representations of the more common special functions—Error, Bessel, Airy, and Legendre.

*For example, the trigonometric function, sin x, is an elementary function from the reader's point of view because he recalls that sin x is that odd, oscillatory, smooth function for which |sin x| ≤ 1, sin x = sin(x + 2π), and for which meticulous numerical information can be found in easily accessible tables; the Bessel function, J₀(x), on the other hand, probably won't be an elementary function to that same reader until he has thoroughly digested Chapter 11. Then, J₀(x) will be that familiar, even, oscillatory, smooth function which, when x ≫ 1, is closely approximated by √(2/πx) cos(x − π/4), which obeys the differential equation, (xu′)′ + xu = 0, for which J₀(0) = 1, and for which meticulous numerical information can be found in easily accessible tables.
ber tables?), long before the creation of the computing power we enjoy today. So why
do we need to have books with 30—50 page chapters of detailed, boring constructions
of power series for all the various cases—ordinary point, regular singular point, regular
singular point—logarithmic case—followed by another lengthy presentation on the power
series representations of the various special functions?1 Unless you intend to become
a specialist in approximation theory or special functions, you are not likely to have to
compute a series solution for a nonautonomous second order differential equation in your
entire life. Somebody long ago, maybe Whittaker and Watson, decided that the boot camp of
ordinary differential equations was series solutions and special functions, including the
Hydra of them all, the hypergeometric function. This makes no sense today and the
entire subject should be boiled down to its essence, which is to be familiar with how to
construct a series solution, and a recognition of those special functions which come up
in partial differential equations and applied mathematics via eigenfunction expansions. The point made in the quote above is an important one, namely that there are differential
equations whose explicit solutions are special functions, whose various properties such as orthogonality and oscillatory behavior are well known, and whose representations are convergent power series with known coefficients.

¹ The author labored long and hard, seeking enlightenment, writing such a chapter in a textbook some time ago. Whatever nascent interest he had in the topic was quickly throttled.

For instance:

The Bessel function of order 2,
J₂(x) = (x/2)² Σ_{n=0}^{∞} [(−1)ⁿ / (n!(n + 2)!)] (x/2)^{2n},
is a solution of the Bessel equation of order 2,
d²y/dx² + (1/x)(dy/dx) + (1 − 4/x²)y = 0.
The function J₂(x) is an oscillatory function with an infinite number of zeros on the positive x-axis, and J₂(x) → 0 as x → ∞. It is well tabulated.

Hence, we can certainly say that we can solve the Bessel equation of order 2, although the solution is not in terms of polynomials or elementary functions. Power series solutions do give us a broader perspective on what is the notion of the solution of an ODE.

Introduce the topic with the construction of the first few terms of the series for Ai(x)
or J₀(x), then develop the recurrence relation for their terms. For the same reason you should not use Euler's Method to approximate the solutions of ẋ = x, do not develop the series expansion for the solutions of ẍ + x = 0; it gives the subject a bad reputation.
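For instance, a dozen lines of Python (a sketch, not from the text) generate the series for J₀ from the recurrence obtained by substituting y = Σ aₙxⁿ into the equation (xu′)′ + xu = 0 quoted above, namely aₙ₊₂ = −aₙ/(n + 2)² with a₀ = 1, a₁ = 0, and compare the partial sum with the classical closed-form series.

import math

def j0_series(x, terms=30):
    # Coefficients from the recurrence a[n+2] = -a[n]/(n+2)^2, a0 = 1, a1 = 0,
    # which follows from substituting y = sum a_n x^n into x y'' + y' + x y = 0.
    a, total, xn = 1.0, 0.0, 1.0
    for n in range(0, 2*terms, 2):
        total += a * xn
        a = -a / (n + 2)**2
        xn *= x * x
    return total

# Compare with the closed-form series sum_k (-1)^k (x/2)^(2k) / (k!)^2.
for x in (0.5, 1.0, 2.405):
    exact = sum((-1)**k * (x/2)**(2*k) / math.factorial(k)**2 for k in range(40))
    print(x, j0_series(x), exact)

That is the whole lesson in miniature: the recurrence is the interesting part, and the numbers come out of it for free.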
Next, if you wish, discuss briefly some of the special functions, their differential equations, and possibly some of the orthogonality relations, but this is already wandering far afield. Then move on, remembering that if someone wants the graph of J_{7/2}(x) or its value at x = 11.054, a computer package or even some hand calculators will provide
it—but who would ask such a question? Something worth considering instead of power series expansions is perturbation
expansions; these are of great importance in applied mathematics and in the analysis of
periodic solutions of nonlinear ODEs. See for instance, the Poincaré—Lindstedt method
in more advanced texts. First order equations can provide some nice examples, as the following shows:

Consider the logistic equation with periodic harvesting
ẋ = (1/10)x(1 − x/40) − (3/4 + ε sin t).
Then r = 1/10, K = 40, so rK/4 = 1, the critical harvesting level, so for existence of periodic solutions we require that |3/4 + ε sin t| < 1, or |ε| < 1/4. We wish to approximate the stable periodic solution with a power series in ε having periodic coefficients, so let
x(t) = x₀(t) + εx₁(t) + ε²x₂(t) + · · · ,   xᵢ(t) = xᵢ(t + 2π).
Substitute and equate like powers of ε:

ε⁰:  ẋ₀ = (1/10)x₀ − (1/400)x₀² − 3/4. The only periodic solutions will be constant ones, x₀(t) ≡ 10, 30. These correspond to the new equilibria under constant rate harvesting H = 3/4. To perturb around the stable one let x₀(t) ≡ 30.

ε¹:  ẋ₁ = (1/10 − (1/200)x₀)x₁ − sin t = −(1/20)x₁ − sin t. The solution is
x₁(t) = A₁e^{−t/20} + (20/401)(20 cos t − sin t) = A₁e^{−t/20} + (20/√401) cos(t + φ),   φ = tan⁻¹(1/20).
Since we want periodicity let A₁ = 0, and note that the exponentially decaying term supports the fact that x(t) is stable.

ε²:  ẋ₂ = (1/10 − (1/200)x₀)x₂ − (1/400)x₁² = −(1/20)x₂ − (1/401) cos²(t + φ), which has a periodic solution (left to the reader).

We conclude that
x(t) ≈ 30 + (20ε/√401) cos(t + φ) + O(ε²).
These kinds of expansions are very much in the spirit of the qualitative theory of ODEs.
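A quick numerical check of the expansion (a sketch; the step size, horizon, and value of ε are arbitrary choices of mine) integrates the harvested logistic equation and compares the long-time solution with 30 + (20ε/√401) cos(t + φ), φ = tan⁻¹(1/20).

import numpy as np

eps = 0.1
f = lambda t, x: x/10.0*(1.0 - x/40.0) - (0.75 + eps*np.sin(t))

# Classical Runge-Kutta, run long enough to wash out transients.
t, x, h = 0.0, 30.0, 0.01
while t < 200.0:
    k1 = f(t, x); k2 = f(t + h/2, x + h/2*k1)
    k3 = f(t + h/2, x + h/2*k2); k4 = f(t + h, x + h*k3)
    x += h/6*(k1 + 2*k2 + 2*k3 + k4)
    t += h

phi = np.arctan(1.0/20.0)
approx = 30.0 + 20.0*eps/np.sqrt(401.0)*np.cos(t + phi)
print(x, approx)   # the two values should agree to O(eps^2)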
Linear and Nonlinear Systems

This final chapter will not be a lengthy one, despite the importance of the topic. The reason is a refreshing one: many of today's textbooks are bulging with nonlinear systems and the models describing them. This is largely the result of the great interest today in dynamical systems, and the availability of the computing power to be able to display and approximate solutions. While the author may question some of the topics chosen or the level of sophistication, it is all done in the modern spirit of qualitative understanding, and that merits approbation.

1 Constant Coefficient Linear Systems

In the previous chapter on second order equations, the linear systems approach, it was
shown that for the two-dimensional system ẋ = a(t)x + b(t)y, ẏ = c(t)x + d(t)y the solution is given by x(t) = Φ(t)c, where
Φ(t) = ( x₁(t)  x₂(t) ; y₁(t)  y₂(t) ),
with col(x₁(t), y₁(t)), col(x₂(t), y₂(t)) a fundamental pair of solutions, and c a constant vector determined by the initial conditions, if any. This result generalizes to higher dimensions; it is largely of theoretical interest since finding fundamental sets of solutions will be nearly impossible to accomplish.

For the constant coefficient case
ẋ = Ax,   A = (aᵢⱼ),  i, j = 1, . . . , n,
one needs to know that if λ is an eigenvalue of A then x = e^{λt}η is a solution, where η is an eigenvector corresponding to λ. A one-line proof suffices:
ẋ = λe^{λt}η = Ae^{λt}η  ⟺  e^{λt}(A − λI)η = 0.
But where to go from here to get Φ(t)? There are lots of perilous paths, many requiring a real knowledge of linear algebra (Jordan canonical form, companion matrices, etc.), and the reason is simply because A might have eigenvalues of multiplicity larger than one. Spending a lot of time on what is basically a linear algebra problem seems a waste of time since

(i) We know a fundamental set of solutions exists—let xᵢ(t) be the unique solutions of the IVPs ẋ = Ax, x(0) = eᵢ, the ith unit vector, etc.

(ii) We can do a little hand waving and state the plausible result

If the n × n matrix A has n distinct eigenvalues λᵢ with associated eigenvectors ηᵢ, i = 1, 2, . . . , n, then the set {e^{λᵢt}ηᵢ} is a fundamental set of solutions of the system ẋ = Ax

(a quick numerical check of this appears a little further below).

(iii) We can quickly take care of the case n = 2 completely, and it gives a hint of what the problem will be for higher dimensions when multiplicities occur.

(iv) The subject is ordinary differential equations, not linear algebra.

Recently, some textbooks have appeared doing a lot of linear algebra for the case n = 3 and then discussing three-dimensional systems. The impetus for this comes from the availability of better 3D graphics, and because of the great interest in three-dimensional systems exhibiting chaotic behavior—the Lorenz equations modeling climate change, for instance. Unfortunately, two-dimensional systems cannot have chaotic structures, but that isn't a good reason to spend excessive time analyzing 3 × 3 matrices.
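Here is the promised numerical check of the plausible result in (ii) for n = 2 (the matrix and initial condition are arbitrary examples of mine): build xᵢ(t) = e^{λᵢt}ηᵢ from the eigenpairs of A and verify that the combination matching x(0) reproduces the solution obtained from the matrix exponential.

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [3.0, 0.0]])          # distinct eigenvalues 3 and -2
lam, V = np.linalg.eig(A)           # columns of V are eigenvectors

x0 = np.array([1.0, -1.0])
c = np.linalg.solve(V, x0)          # x(0) = c1*eta1 + c2*eta2

t = 0.7
x_eig = sum(c[i]*np.exp(lam[i]*t)*V[:, i] for i in range(2))
x_exp = expm(A*t) @ x0              # reference solution of x' = Ax
print(x_eig.real, x_exp)            # the two should agree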
Returning to the cozy n = 2 world: consider s=<:i:>=(: :><:>=Az If A1,A2 are distinct eigenvalues with associated eigenvectors 711,712, then and we can state: 931 t = 6A” 1, 932 t = 6"” 2 are a fundamental air of solutions and every
N n N n p solution can be written as
f“) = 01f1(t)+ 02905) for appropriate choices of the constants 01,02. What about the case of complex conjugate eigenvalues A = 'r‘ + iq, A = 'r‘ — iq, q aé 0?
For any real matrix A, if A is a complex eigenvalue with associated complex eigenvector
n then since fl = A, ‘ (A—ADn=0®(A—ADﬁ:m 5 Linear and Nonlinear Systems 91 so 17 is an eigenvector associated with X, and the statement above applies. But we want real solutions, so go back to a variation of the technique used for the
second order equation whose characteristic polynomial had complex roots. If /\ = 1" + iq
and 17 = (a + 2'5, '7 + i6) then any solution can be expressed as = rt iqt ——iqt a_iﬂ
£(t) e [ole (7+i6>+cze (7_2.6 . Now let c1 = c2 = %, then cl = %, Cg = —§1;, and express cos qt, sinqt in terms of complex exponentials to get two real solutions it (t) 2 e“ acos qt — ﬂsinqt
~1 '7 cos qt — 65in qt g2 (t) = e”( and there’s our fundamental pair.
The case of /\ a double root is vexing when we can only ﬁnd one eigenvector 17. The asinqt + Boos qt
75inqt +6sinqt ’ linear algebra approach is to ﬁnd a second nonzero vector a satisfying (A — AI); = 17. Then a fundamental pair of solutions is
x1(t) = 3M0, 1:20) : eAt(U + ﬁt), and this must be proved by showing a is not a multiple of 17. We will do this shortly. But a differential equations approach is more intuitive, suggested by the strategy used
in the second order case of multiplying by t. Try a solution of the form te’Vn—it doesn’t work. So try instead that solution plus 06’“ where a is a nonzero vector. Then :c(t) = te"t17 + e’Vcr is assumed to be a solution, and since i = Am, a little calculation
gives N N N N
g : /\te’\t17 + e’“(17 + Ag) = te’uA17 + e’VAz.
But A17 = A17 so the first terms cancel and after cancelling e’“ we are left with
Ag=17+/\cr or (A—AI)g=17 which is exactly what we want.
If a were a multiple of 17, say a = k17, then the equations (A — AI)k17 = 17 would imply k(A — AI)17 = 0 = 17 which is impossible. Now we can prove xlft) arid 152 (t) are a fundamentaleair of solutions by showing we can use them to solve uniquely any
IVP, :2 2 Am, :1:(0) = col(:z:0, yo) with a linear combination £(t) = c1210) + 02g2(t) = cle’vn + cze’v(g + t17).
If 17 = col(u,11),a = col(w, 2) this leads to the Quations clu + ng = x0, C221 + czz = yo 92 Ordinary Differential Equations—A Briel Eclectic Tour which can be solved for any $0,310 since the columns of the coefﬁcient matrix (3 ‘5) are
linearly independent, so its determinant is not zero. All this takes longer to write than it does to teach, but limiting the discussion to n = 2
makes the calculations easy and avoids all the problems associated with dimensionality.
Just consider n = 3 with A an eigenvalue of multiplicity three and the various possibilities;
you might shudder and just leave it to the linear algebra computer package. For the nonhomogeneous system is = A(t)x + B(t) or at 2 Ax + B(t) we have discussed the general variation of parameters formula for n = 2 in the previous
chapter. It generalizes easily to higher dimensions. In the case A(t) = A note that the
variation of parameters formula does not require the computation of <I>_1(s) when <I>(s)
is the fundamental matrix satisfying <I>(0) = I. This is because @(t)<1>'1(s) = <I>(t — s)
which is not hard to prove: Let (2(t) = <I>(t)<I>_1(s) and since <l> = A<I> then = <l>(t)<I>‘1(s) =
A<I>(t)<I>_1(s) = AQ(t). Therefore (2(t) is the solution of the [VP X = AX,
X(s) = I, but so is <I>(t — 5) hence by uniqueness (2(t) = <I>(t — 5). Finally we note that the method of comparison of coefﬁcients can be used to solve
a“ = Ax + B (t) if B (t) has the appropriate form, but it can be a bookkeeping nightmare.
For example to ﬁnd a particular solution for the system 1" _A x V+ t2 t xp(t)=Bt2+Ct+D+Esint+Fcost
y _ y sint ry yp(t)=Gt2+Ht+J+Ksint+Lcost substitute and compare coefﬁcients, and that is assuming sin t is not one of the components of the solutions of z" = Ax. Good luck—if you do the problem three times and get the
same answer twice—it’s correct. 2 The Phase Plane The phase plane is the most valuable tool available to study autonomous twodimensional
systems (*) = = Q(xry)a and it is important to stress that the righthand sides depend only on a: and y—there is
no tdependence. If there is tdependence so that P = P(x,y,t), Q = Q(x,y,t) the
system can be transformed to a 3dimensional autonomous system by introducing a third variable 2 = t. Then we have :i'ZPCanaZ), yZQCiayaz): 17 which sometimes can be useful and introduces 3Dgraphs of solutions. We will pass on
this. For simplicity, we will assume that P(x, y), Q(x, y) are continuous together with their
ﬁrst partial derivatives for all x,y. The very important point is that given a solution 5 Linear and Nonlinear Systems 93 (a:(t), of (uk), then as t varies it describes parametrically a curve in the x,y—plane.
This curve is called a trajectory or orbit for the following simple reason which is an easy
consequence of the chain rule: If (a:(t), y(t)) is a solution of (uk) and c is any real constant then (x(t+c), y(t+c))
is also a solution. But if (a:(t),y(t)) describes a trajectory then (x(t + c),y(t + 0)) describes the same
parametric curve, but with the time shifted, and hence describes the same trajectory.
Quick, an example, Herr Professor! 3': 2 31,3) 2 —4a: (corresponding to if + 4.7: =20). If x(0) = 1, y(0) = 0 then
a:(t) = cos 2t, y(t) = —2 sin 2t so x(t)2 + MEL = 1. The trajectory is an ellipse
traveled counterclockwise and we have the phase plane picture A y This trajectory represents every solzution that satisﬁes the initial conditions
x(to) = .710, y(to) = go, where xg+£41 = 1. For instance, x(t) = cos 2(t—7r/2),
y(t) = —2 sin 2(t — 7r / 2) lies on the ellipse and satisﬁes the initial conditions
a: = 1, y(7r/2) = 0, or the initial conditions a: = %, y = Remark. To determine the direction of travel it is usually easy to let a: > 0 and y > 0
and determine the direction of growth. In the example above if y > 0 then Li: > 0 so the
xcoordinate increases, and if a: > 0 then 3) < 0 so the ycoordinate decreases. The next important point is that trajectories don’t cross. If they did you could
reparametrize by shifting time, and create the situation where two distinct solutions passed
through the same point at the same time. This would violate uniqueness of solutions of
the IV P. Our phase plane is now ﬁlled with nonintersecting trajectories waltzing across it—
“ﬁlled” is right since for every point ($0,110) and any time to we can ﬁnd a solution of
(*) with x(to) = 2:0, y(t0) = 3:0. What else could it contain? Points ($0,110) where
P(a:0, yo) = Q(a:0, yo) = 0 called equilibrium points, or critical points. If we think of
the system (uk) as describing motion then an equilibrium point would be a point where
3': = y = 0 so the system is at rest. 94 Ordinary Differential Equations—A Brief Eclectic Tour What else? It could contain closed curves whose trajectories represent periodic
solutions. For such a solution it must be the case that (m(t+ T), y(t+T = (:z:(t), y(t))
for all t, and some minimal T > 0 called its period. Note that since the equations are
autonomous, the period T is unspeciﬁed so it must be calculated or estimated from the
differential equation. This is often a formidable task when P and Q are nonlinear. Closed trajectories are called cycles and a major part of the research done in differential
equations in the last century was the search for isolated ones or limit cycles. By isolated
is meant there is some narrow band enclosing the closed trajectory which contains no
other closed trajectory. An informal definition would be that for a limit cycle, nearby
solutions could spiral towards it (a stable limit cycle), or spiral away from it (an unstable
limit cycle), or both (a semistable limit cycle). / c /w
K / stable unstable semistable The important point to be made about the phase plane is that the nonintersecting
trajectories, possibly equilibria, cycles, and possibly limit cycles, are all its possible
inhabitants. This is the result of some fairly deep analysis and that the universe being
studied is the myplane. It has the special property that a closed, simple curve (it doesn’t
overlap itself) divides the plane into an “outside” and “inside”, and this restricts the kind
of behavior we can expect from solutions of (*). Nevertheless, the phase plane can be
spectacular in its simplicity. 3 The Linear System and the Phase Plane For the 2dimensional linear system
i=az+by, y=C$+dy, clearly (0,0) is an equilibrium point and to ensure it is the only one we stipulate that
ad — bc 91$ 0. The case where ad — bc 2 0 will be mentioned brieﬂy later. A discussion
of this usually follows some analysis of linear systems, so expositors are eager to use all
the stuff they know about 2 Ag where A = g). This often confuses matters and it is much easier to use the second order equation, . . 0
'z+pm+qz=0, A=( 1),
—q —P
since we know all about its solutions and we can describe every possible phase plane
conﬁguration for the general linear system, except one. We analyze the various cases of roots A1, A2 of the characteristic polynomial p(/\) = A2 + p/\ + q, and the important 5 Linear and Nonlinear Systems 95 point is that the pictures for the full linear system will be qualitatively the same, modulo
a rotation or dilation. 1. —)\1, —)\2 negative and distinct. 02 = 0: (a:(t),y(t)) —> (0,0) as t —> 00, and y(t) = —)\1:c(t); the trajectory is a
straight line going into the origin. c1 = 0: (a:(t),y(t)) —> (0,0) as t —> co, and y(t) = —)\2:c(t); the trajectory is a
straight line going into the origin. Now suppose A2 > A1 then y(t) _ —/\lcl — Azcze_()‘2_)‘1)t Mt) 01 + cze—(Az—Am _’ —/\1 as t —> 00 so all trajectories are asymptotic to the line y = —)\1:13 and we get this picture—the
origin is a stable node which is asymptotically stable. V) ,.
°\ q. y=7~2 x Note that the straight line trajectories do intersect at the origin when t = 00 which does
not contradict the statement that trajectories do not intersect. 2. A1, A2 distinct and positive. The previous picture is the same but the arrows are
reversed—the origin is an unstable node. 3. A1, A2 complex conjugate with negative real part, so A1; = 1' :i: ia, 1' < 0. Use the
phaseamplitude form of the solution: $(t) = Ae‘" cos(at — ¢), = y(t) = —1‘Ae"" cos(at — ¢) — aAe‘” sin(at — ¢),
so clearly (a:(t),y(t)) —> (0,0) as t —> 00. Now do a little manipulation: y(t) + 'r'a:(t) = —Ae_" sin(at — ¢) and therefore the trajectory is represented by the equation +3:2 = A254”, 6. Ordinary Differential Equations—A Brie! Eclectic Tour and expanding y2 + 2mg + ((12 + r2)a:2 = a2A2e‘2", r > 0. In times past, when analytic geometry was not a brief appendix in a calculus text,
one would know that a rotation of axes would convert the lefthand side to an
elliptical form. Since the righthand side is a positive “radius” approaching zero as
t —> oo—the origin is a stable spiral which is asymptotically stable. y A1, A2 complex conjugate with positive real part. The previous picture is the same
but the arrows are reversed—the origin is an unstable spiral. ~/\1 <0</\2. a:(t) = ale—Alit + C26)“ = y(t) = —/\1€16—)‘1t + A2026“: 02 = 0: (a:(t),y(t)) —> (0,0) as t —> 00 and y(t) = —/\1.7:(t)
cl = 0: (a:(t),y(t)) —> (0,0) as t —> —oo and y(t) = A2m(t) We get two straight line trajectories, one entering the origin and the other leaving
and the remainingtrajectories approach them as t —> :lzoo—the origin is an unstable
saddle point. Y
X
0
We have previously discussed the case /\ = :lzz'a.
me) = Acos(at — «5) 2 .112 _ 2 = y(t) = —aA sin(at — (,0) a: + Or2 A 5 Linear and Nonlinear Systems 97 The trajectories are families of ellipses—the origin is a center which is stable, but
not asymptotically stable. Note that the trajectories are cycles since they are closed curves, but not isolated ones so
they are not limit cycles. 7. —A < 0, a double root. a:(t) = ole—At + cﬁe‘” = y(t) = ~—Acle_)‘t — Acﬁe‘” + 026—)‘t = —Aa:(t) + 026—”
Then (a:(t),y(t)) —» (0,0) as t —» 00, and if c2 = 0 the trajectory is a straight line entering the origin. If C2 5A 0 all trajectories are asymptotic to that line—the origin
is a stable node which is asymptotically stable. Y 8. A > 0, a double root. The previous picture is the same but the arrows are reversed—
the origin is an unstable node. There is a degenerate case corresponding to the second order equation it + (12': = 0, so
the eigenvalues are A = 0, A = —a. Then at at a:(t) = Cl + cya— , :i:(t) = y(t) = —acze“ and suppose a > 0. The (a:(t),y(t)) —» (c1,0) as t —» 00 and a:(t) = c1 — %y(t) or
y(t) = —a:c(t) + acl a family of straight lines. The picture is 98 Ordinary Diﬂerential Equations—A Brief Eclectic Tour and it implies every point on the maxis is a critical point. If a < 0 the arrows are reversed.
There are several other degenerate cases corresponding to the case where det A = 0, but
the condition det A aé 0 assures us that (0,0) is the only equilibrium point. Using the second order equation to model the various cases does not pick up one case
found in the full linear system. It corresponds to the system i = gm, 3) = qy => 95(13) = 016‘”, W) = 026‘” so y(t) = ﬁx“), and if q < 0, then ($(t),y(t)) —+ (0,0). The trajectories are straight
lines of arbitrary slope going into the origin; if q > 0 they leave the origin. These are
also called nodes, or more speciﬁcally, proper nodes, and the origin is asymptotically
stable if q < 0 and unstable if q > 0. r<O You will ﬁnd that the effort to characterize the various cases in the general case is not
much more difﬁcult, except that the linear algebra gets in the way. The characteristic
polynomial of A is A2 — (a + d)/\ + (ad — be) which is more complicated, and one must
ﬁnd eigenvectors or do a rotation of axes to pull out the pictures. When one considers the almost linear system or the linear approximation at an
equilibrium point of the full system, one should think of the nonlinearity as blurring
the pictures above, or tweaking the eigenvalues of the matrix. Hence, one can rationalize
that almost always the asymptotically stable or unstable structure will be preserved. It is
the stable case, the center, which can be preserved or transformed into a stable or unstable
spiral under small perturbation. The zero real part of the eigenvalue can stay the same or 5 Linear and Nonlinear Systems 99 shift a little in either direction. Developing this kind of thinking, not rote memorization
of formulas or procedures, is the goal. Afterthought: Just as a topic of academic interest it is very difficult to come up with
an example where solutions satisfy lim (a:(t), y(t)) 2 (11:0, yo) but (11:0, yo) is not stable. t—mo
Solutions must have the property that they get close to (11:0, yo) then veer away, then get closer, then veer away again etc. so that the “once close, always close” property doesn’t
hold. One example in polar coordinates a: = 7‘ cos 0, y = r sin0 is 7‘ = r(1 —r), 9: sin20, and since 7‘ = 1 is an attractor, we see that (r, 6) = (1, 0) is an attractor and t1_i>m r(t) = 1.
00
But (1, 0) is unstable in view of the behavior of θ(t), which was mentioned in Section 2.3, and solutions keep leaving any neighborhood of (1, 0) temporarily.
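The polar-coordinate example is easy to simulate. The equation for θ is hard to read in this copy of the text; the classical version of this example takes ṙ = r(1 − r), θ̇ = sin²(θ/2), and that is what the sketch below integrates (step size and starting point are arbitrary choices of mine). Every such solution ends up at (1, 0), yet a solution started just above the x-axis near (1, 0) travels all the way around the unit circle before returning, temporarily leaving any small neighborhood of the point.

import numpy as np

def run(theta0, T=400.0, h=0.01):
    # Euler integration of r' = r(1 - r), theta' = sin(theta/2)**2
    r, th, worst = 1.0, theta0, 0.0
    for _ in range(int(T/h)):
        r += h*r*(1.0 - r)
        th += h*np.sin(th/2.0)**2
        worst = max(worst, np.hypot(r*np.cos(th) - 1.0, r*np.sin(th)))
    return np.hypot(r*np.cos(th) - 1.0, r*np.sin(th)), worst

final, worst = run(0.05)      # start just above the x-axis, next to (1, 0)
print("final distance from (1,0):", round(final, 3),
      "  largest excursion:", round(worst, 3))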
4 Competition and Predator-Prey Systems

This section is not intended to give a lengthy description of models of rabbits slugging it out with field mice or trying to escape from coyotes. Rather we will give a rationale for the construction of such models, and mention the interesting case where one of the populations is being harvested.

Given X, Y two populations, with x(t), y(t) their sizes at time t, let ẋ/x, ẏ/y be their
per capita rate of growth, and assume these are functions of their present size. This leads
to the model governing their growth :i: = xf(a:,y), I) = 149013.14), and we can assume that f(a:,y) and g(a:,y) are smooth functions for a: 2 0, y 2 0
(population sizes aren’t negativel). This simple assumption makes it much easier to
classify models, and do some preliminary analysis before specifying the nature of f and 9.
Now we can distinguish between two models: Competition Models: If y increases then the per capita rate of growth of X decreases. Similarly, if :1:
increases then the per capita rate of growth of Y decreases. PredatorPrey Models: Let Y be a predator population and X a prey population. If :1: increases then the
per capita rate of growth of Y increases, whereas if y increases the per capita
rate of growth of X decreases. Given these assumptions, we will study the stability of any equilibria by examining the
associated linearized system at an equilibrium point. Recall that for a nonlinear system :i: = P(:I:,y), I) = Q($ay)a 100 Ordinary Diﬂerenlial Equations—A Brief Eclectic Tour and equilibrium point (2300,1100), where P(a:oo,yoo) = Q(a:°o,yoo) = 0, the associated
linearized system is 11 2 Au where u = col(u, v), _ Px(x,y) cam)
A ‘ (Pym) Qy<x,y))($w,yw)’ anduzx—xoo,v=y—yoo. Competition Models
In this case f (x,y) should debrease as y increases, and g(x,y) should decrease as 1:
increases, and therefore
9i 92
ﬁg (9.7:
Furthermore, f (13,0) and g(0, y) should represent some suitable growth law for each
population in the absence of the other. Types of Equilibria. The matrix of the linearized system is (x,y)<0, (x,y)<0, x>0,y>0~ (< 0)
_ f(x, :1) + mm) xfy(w, y)
A _ ( ygx(x,y) 9073,21) + ygy(x,y))(xw,ym)’ (< 0)
and the possible equilibria are: (0, O) : We have 11 = uf(0, 0), 1') = vg(0, O) and A : (“(30) gage» ' Our basic assumption tells us that u should increase if v = 0, and similarly for u, so we
must have f(070) > 0, NM) > 0, and therefore (0,0) is an unstable node. The growth of X in the absence of Y (y = 0) is governed by the equation 17: = a: f (x, 0),
so there could be the possibility of a stable equilibrium K (carrying capacity). This would
imply that f(K,O) = 0 and Kf’(K,0) = wa(K,0) < 0. Note: there doesn’t have
to be such a K e.g., d: = a2: — bxy, a,b positive and if y = 0 then 1': = ax which is
exponential growth. (K, O) : Then
(< 0)
A = (0 + KMK, 0) K fy(K, 0))
0 g(K, 0) ' It could be that g(K, 0) = 0 in which case (K, 0) would be a degenerate stable node, but
we will ignore that possibility. Then we expect that if (K, 0) were stable then competition 5 Linear and Nonlinear Systems 101 would favor X whereas Y would perish, so
g(K, 0) < 0 => (K, 0) a stable node. But if (K , 0) were unstable then for .7: near K and y small, population Y would increase,
so g(K,0) > 0 => (K, 0) a saddle, (0, J) : Here J plays a similar role as K for X, and the analysis is the same with
g(0, J) replacing f(K, 0). We have made considerable inroads on what the phase plane portrait will look like
with minimal assumptions on the nature of f (z, y) and g(z, At this point the various
possibilities are: y y
I <_ (i) J \ (iv)
) \
X X X
0 K 0 K Now, the question becomes whether there exists one or more equilibrium points
(zomyoo), zoo > 0, yoo > 0, where f(zoo,yoo) = g(zoo,yoo) = 0. If such a point
were stable we have the case of coexistence; if it were unstable we have the case of
competitive exclusion. In a very complex model there could be both (this might also
occur in a simple model if some exterior factor like harvesting were imposed), so that for
instance when X is small enough both survive, but if it is too big it wins. The case where there is one equilibrium (9300,1100) can now be analyzed given speciﬁc
forms for f (as, y) and g(a:, y). The usual textbook models assume that the growth of each
population in the absence of the other is logistic: 9'5 = m (1 —  mm» 3) = 8y (1  — yﬂ(z,y)
where r, s, K, and J are all positive, and a(a:, y), [3(33, y) are positive functions for a: > 0, y > 0.
We will stop here with the competition case, but note that the previous graphs already say a lot. In cases (i) and (ii) there need not be a point (zoo, goo) so in (i) Y wins no
matter what, and in (ii) X wins. For graph (iii) we suspect that there would be a stable
equilibrium (x∞, y∞), whereas in (iv) it would be an unstable one.
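Before turning to predator-prey models, here is a short sketch of the linearization machinery just described, applied to a made-up logistic competition model (all numbers are mine, not the text's): locate an interior equilibrium, evaluate the Jacobian there numerically, and read off stability from its eigenvalues.

import numpy as np
from scipy.optimize import fsolve

# A hypothetical competition model of the form x' = x f(x,y), y' = y g(x,y):
f = lambda x, y: 1.0 - x/100.0 - 0.5*y/100.0
g = lambda x, y: 1.0 - y/100.0 - 0.5*x/100.0

F = lambda v: [v[0]*f(v[0], v[1]), v[1]*g(v[0], v[1])]
eq = fsolve(F, [60.0, 60.0])              # interior equilibrium

def jacobian(v, h=1e-6):
    # centered-difference Jacobian of F at v
    J = np.zeros((2, 2))
    for j in range(2):
        dv = np.zeros(2); dv[j] = h
        J[:, j] = (np.array(F(v + dv)) - np.array(F(v - dv))) / (2*h)
    return J

print(eq, np.linalg.eigvals(jacobian(eq)))  # both eigenvalues negative: coexistence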
Predator-Prey Models

Recall that the model is
ẋ = x f(x, y),   ẏ = y g(x, y),
and the assumption that the per capita rate of growth of the prey X decreases as the
predator Y increases, whereas the per capita growth of Y increases as X increases. 102 Ordinary Differential Equations—A Brief Eclectic Tour Therefore
8f 39
ayﬁw) < 0, area) > 0, m > 0,11 > 0
and the matrix A is
(< 0)
A: mfy($>y) ) I
gammy) Way) + ygymy) hwy”) (> 0). Further natural assumptions would be that X increases if y = 0, hence the prey grows in '
the absence of predators, and if X is the sole or major food supply of Y, then Y goes
extinct when m = 0. These assumptions mean that the equilibrium point (0,0) must be a saddle. If the prey
population has a growth law that leads to a stable carrying capacity K, we do not expect
the predator population to grow extinct near K since it has lots of little creatures to munch
uponl. This suggests that an equilibrium point (K, 0) is another saddle. Given the above, we can already construct the phase plane portraits y y and now all that remains is to analyze the stability of any other equilibria. The ever
popular model is the Lotka—Volterra model whose equations are i=m—amy, y=—sy+ﬁmy where all parameters are positive. Then X grows exponentially in the absence of Y and
Y goes extinct in the absence of X. The equilibrium point (s/ﬁ,r/a) is a center for
the linearized model and it is preserved in the full model. This leads to the wonderful
harmonious universe phase plane portrait of ovoid shaped periodic solutions ﬁlling the
region m > 0, y > 0. The author has always been bothered by this model, picturing a situation where there
are two rabbits quivering with fear and 10" coyotes—not to worry things will improve. A
more tragic scenario would be if both rabbits were of the same sex. But setting ﬂippancy
aside, the Lotka—Volterra model lacks a requirement of a good model—structural stability.
This can be loosely interpreted as robustness of the model under small perturbations. For instance, modify the equation for X assuming that in the absence of Y its growth
is logistic with a very large carrying capacity K = 1/6, 6 > 0 and small. One obtains 1 However, an amusing possibility would be that if the prey population gets big enough it starts destroying
predators (the killer rabbits model). Then (K , 0) would be a stable node. 5 Linear and Nonlinear Systems 103 the equation for X a'r = rw(1 — em) — awy = rm — amy * ﬂag, which is certainly a small perturbation of the original equation. The new equilibrium point
(woo, goo) will be (5/3, r/a — es/ﬁa) which will be as close as we want to (s/ﬁ, r/a)
if e is small enough. But a little analysis shows that it will be a stable spiral or a stable
node for that range of 6 so the periodicity is lost. An interesting discussion of further
variants of the Lotka—Volterra model can be found in the book by Policing, Boggess, and
Arnold. 5 Harvesting If we consider the standard competition model with logistic growth for both populations,
and where their per capita rate of growth is reduced proportionally to the number of
competitors present, we have the model , w w=m(1—?) —awy = a:(r — rw/K — ay), y=sy(1—%)—ﬁwy = y(s  sy/J  396), where all parameters are positive. The linear curves in parentheses, called nullclines, are
graphed and we get the possible pictures y y
J J
r/ at
r/ at
or
0 X x
s/B K 0 K s/B
Coexistence 7 Competitive Exclusion where the equilibria are circled.
Now suppose the population Y is harvested at a constant rate H > 0 caused by hunting,
ﬁshing, or disease, for instance. The second equation becomes y=y(ssy/Jﬁw)H and the righthand side is a hyperbolic curve whose maximum moves downward as H is
increased. The successive pictures in the coexistence case look like 104 Ordinary Ditteremial Equations—A Brief Eclectic Tour y y y
r/ot r/on
r/a
0 x x x
K 0 K 0 K
H < Hc H = Hc H > He and note that the equilibria (0,0) and (K, 0) have moved below the aaxis. Along and
near the maxis we have 513% m —H < 0 so that once there are no equilibria in the positive
icyquadrant, then X wins all the marbles (or birdseed). The model is quite similar to
that for the onepopulation logistic model with harvesting discussed in Chapter 2. The critical value He of the harvest rate is easily found by solving a:
r—L—ay=0 K
for as, then substituting it into y(s—sy/J—ﬁ:c)—H=0 and solving the resulting quadratic equation. The value of H beyond which there will
not be real roots will be He. Example. A system with coexistence is $_ 1 im_1 ._ 4 1 1
462400 1033” y"y5 5003’ 103’“ with saddle points (200,0) and (0,400), and (momyoo) = (50,375) a stable node. To
determine the critical harvest H = He when the ypopulation is harvested, ﬁrst solve for {132 1 1 1 2 Next solve
4 1 1 2
——— —— 20 —— —H=
3’ [5 500 103 < 0 530] 0
which simplifies to 2y² − 750y + 1250H = 0. Its roots are
y = (1/4)(750 ± √(750² − 10⁴H)),
and they will no longer be real when 10⁴H > 750², or H > 750²/10⁴ = 56.25 = Hc.
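The arithmetic is easy to check by machine (a sketch that simply replays the computation above): find where the quadratic 2y² − 750y + 1250H = 0 loses its real roots.

import numpy as np

def real_roots_exist(H):
    # Discriminant of 2*y**2 - 750*y + 1250*H = 0
    return 750.0**2 - 4.0*2.0*1250.0*H >= 0.0

# Bisect for the critical harvest rate H_c.
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = 0.5*(lo + hi)
    if real_roots_exist(mid):
        lo = mid
    else:
        hi = mid
print("H_c ~", lo)        # should print ~56.25 = 750**2 / 10**4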
One can apply harvesting to competitive exclusion cases and create coexistence, as well as to predator-prey models to destabilize them. More complicated models than the logistic-type ones could make interesting projects, possibly leading one to drop mathematics altogether and take up wildlife management. Happy Harvesting!

6 A Conservative Detour

A pleasant respite from the onerous task of solving nonlinear equations in several variables
to ﬁnd equilibria, then linearizing, calculating eigenvalues, etc., is afforded by examining
conservative systems. No knowledge of political science is required, and the analysis
of the nature of equilibria in the twodimensional case is simply a matter of graphing a
function of one variable and locating its maxima, minima, or inﬂection points—a strictly
calculus exercise. Since the differential equations model linear or nonlinear, undamped,
massspring systems or pendulums, one might consider brieﬂy studying them prior to
embarking on the discussion of almost linear systems.
The stande equation for a one degree of freedom conservative system is i+f(m) =0, where m = m(t) is a scalar function. This could describe the motion of a particle along
a line or curve, where the restoring force f is only dependent on the displacement,
so there is no damping, i.e. no m' term present. Some simple examples are f = kzm,
the undamped oscillator, f 2 7‘5 sin m, the pendulum, or f = m + $3, a nonlinear
spring. We will assume that f is sufﬁciently nice so that its antiderivative 2
f1 f (s)ds is deﬁned and satisﬁes F’ = f Now multiply the differential equation
by z’ + f(m)m' = 0, then integrate to obtain the energy equation: i2 2 This is the key equation; the first term on the left is a measure of the kinetic energy per unit
mass, and the second is a measure of the potential energy per unit mass. Consequently,
the energy equation says that along a solution + F = C, a constant. (kinetic energy) + (potential energy) = constant, which is the deﬁning characteristic of conservative systems—their total energy is constant
along trajectories.
Moving to the phase plane, where (1:, = (m, y), the energy equation becomes 1
E(w,y) = 5212 + F(w) = C, and we can deﬁne z = E(m,y) as the energy surface. Its level curves E (m, y) = C, in 106 Ordinary Differential Equations—A Briei Eclectic Tour the phase plane, are therefore the trajectories of the twodimensional system j3=yr For the simple oscillator f(:1:) 51:, so F(:I:) = §x2, and the energy surface is the
paraboloid z = 5112 + @132 whose level curves are circles §y2 + $432 = C, reﬂecting the
fact that the equilibrium point (0,0) is a center. The diagram below shows the values of
the total energy 0 = K1,K2. The equilibria of a conservative system are points (5:, 0) where f (37:) = 0, and since
F ’(x) = f they are critical points of F(:c); this is the key to the analysis. Rewrite
the energy equation as éy2=C~F(x) or y=::\/2_ \/C—F($), and then compute the values of y near x = :E for selected values of the total energy 0.
For each x and C there will be two such values, in view of the :I: sign, which tells us
that trajectories will be symmetric about the x—axis. For gorgeous, exact graphs this plotting can be done using graphing technology, but to
sketch trajectories and analyze the stability of (:E, 0), we only need eyeball estimates of
the quantity ‘/C — F(:I:) near the critical point 5'. The various possibilities are: (i) F(:i') is a local maximum. Plot 2 = F(:c) near :5 = i: then select values 2 =
K1, K2, . . ., of the total energy and intersect the horizontal lines 2 = K1 with the
graph. Align an (13,31) graph below the (2:, z) graph then plot or estimate the points
(x, ::\/2_ x/Ki — F(:I:)). We see that (5:, 0) is a saddle point. (ii) F(:Z') is a local minimum. This case is sometimes referred to as a potential well or
sink. Proceed as in i) and we see that (10) is a center. 5 Linear and Nonlinear Systems 107 K2
m
i) ii) (iii) (:3, is a point of inﬂection. The trajectory through (:3, 0) will have a cusp and
it is sometimes called a degenerate saddle point—it is unstable. Z
K3 _ _ _ _ _ _ . _ _ . _ _ _ _ _ _ _ _ _ _ _ _ _ _ __
K2 _ _ _ _ . _ . _ _ _ _ _ _ _ _ _ _ _ _ __
i
I »
K1 _ _ _ _ _ _ _ _ _ _ _ __;_ _ _ . _ _ _ _ _ _ _ __
I
I
. _ x
0 xi pet
Y
>1 108 Ordinary Differential Equations—A Brief Eclectic Tour Since i), ii) and iii) are the only possibilities we can conclude that conservative systems
51? + f(:1:) = 0 can only have equilibria which are saddle points (unstable) or centers
(stable), and there will be no asymptotically stable equilibria. To plot a speciﬁc trajectory, given initial conditions ($(to)ai7(to)) = ($(to)ay(t0)) = (170,340),
use the fact that its total energy is constant. Hence 1 E03731): 2 1
g? + Fm = 52/3 + Foo)
is the required implicit representation. Examples. We start off with the classic 1. it: + i sina: = 0, the undamped pendulum. Then f(:z:) = ii sinm so F(:z:) =
—% cosa: which has local minima at If : i2n7r and local maxima at it = i(2n + 1)7r,
n = 0, 1, 2, . . . . Given total energy C, trajectories will satisfy the equation yziﬁ C+%cosm, and the picture is: This is a local picture which shows that a moderate perturbation from rest (:1: = y = 0)
will result in a stable, periodic orbit, whereas a small perturbation from a: = in, y = 0,
the pendulum balanced on end, results in instability. A more complete picture of the
phase plane looks like this 5 Linear and Nonlinear Systems 109 The unbounded trajectories at the top and bottom of the graph represent trajectories
whose energy is so large that the pendulum never comes to rest. For 0 < C' < % there is
a potential well centered at (0,0), which implies that initial conditions lw(0) < 7r, mam = 131(0): < x/5 %+ %cosw<o), will give a periodic orbit around the origin. Remark. A potential well has a nice physical interpretation if we think of a bead sliding
along a frictionless wire. y I
I

dt
I y=a=+xl2 \lK—F(x)
0 x0 x1 If the bead is started at the point (:50, K) = (:50, F ($0)) in the :czplane, corresponding to
the point (:50, 0) in the myplane, its velocity and kinetic energy are zero, and its potential
energy is F(:c0). As the bead slides down the wire its kinetic energy increases, reaches a
maximum, then returns to zero at the point (:51, K) = (:51, F(:c1)) corresponding to the
point (:51, 0) in the :cyplane. There it will reverse direction (y = 5% < 0) and return to
the point (:50, K). 110 Ordinary Differential Equations—A Brie! Eclectic Tour 2. 53 + 4% (a: Z %) = 0, the approximate pendulum with sina: replaced by the approx
2: imation a: — —6—, valid for small Then $3 f(z)=%(a:—F) and F(z)=%(z2—%) which has a local minimum at a: = 0, and maxima at a: = ix/g, with F(:i:\/€_i) = 3g/2L.
Periodic motions will exist for M0)! < x/é, Mon ; [11(0): < @ 3 — mm + rig"), and the picture is the following one: 3.5+a:+a:2=0,so 2 3 = a:+a:2, and F(a:) = %— + 0%
which has a local minimum at a: = 0, a local maximum at a: = *1, and F(—1) =. 1/6. There is a potential well at a: = 0 but it will not be symmetric as in the previous two
examples. 5 Linear and Nonlinear Systems 111 The reader might want to look at examples like a) f($) = m—iz, b) f(a:) = a: — 1 — A99, 0 < /\ < 1/4, but examine the case i S A,
—1 ifo —1 C) 0 iflajl<1
1 ifo 1. An important question is the following one: given a closed orbit of a nonlinear system,
representing a periodic solution, what is its period T? For most autonomous systems this
is a difﬁcult analytic/numeric question. If one is able to obtain a point (930,310) on the
orbit, then one can numerically chase around the orbit and hope to return close to the
initial point, and then estimate the time it took. If the orbit is an isolated limit cycle even
ﬁnding an (930,310) may be a formidable task. But for twodimensional conservative systems it is easy to ﬁnd an expression for the
period. We will assume for simplicity that the orbit is symmetric with respect to the
y—axis so it looks like this: If the equation is £13 + f = 0, then when a: = a, 9': = y' = 0 so the energy equation is $2 E(a:,y) = E(a:, = 2 + FOB) = F01)» F’($) = f0”), 112 Ordinary Diﬂerential Equations—A Brief Eclectic Tour 01'
dm 12 / . We have used the + sign since we will compute in the region i = y 2 0. The last
equation implies that
dt = 4 dm
V5 (F(a)  F070)” ’
and if we integrate along the orbit from m = 0 to m = a we get 1/4 of the time it takes to traverse the orbit, hence [14“
o \/§(F(a)F(ﬂv))1/2
1 =” 0 (F(a)—F(m>>1/2d$' The expression for the period is a lovely one, but has a slightly venomous bite since
the integral is improper because the denominator vanishes at x = (1. Nevertheless, the period is ﬁnite so the integral must be ﬁnite—with care it can be evaluated numerically.
Let’s look at the pendulum equation Period = T(a) = 4 at + %sinm = 0, 30(0) = (1, 35(0) = 0, and some approximations: lst Approximation: sinm % m and we’ll play dumb and suppose we don’t know how to
2
solve it + 73—30 = 0. Then F(m) = 73—13 and ﬂd$=4\/ZSHI_1 E:lr1'227l' L
0 ﬁ(a__m_) g a 0 g 2 2 which is independent of a and valid only for small :1:. If L = 1m. and g = 9.8 m/sec2
then T % 2.007090. “ 1 o L(“—§3—Z§+%) We computed it for a = 1 / 2, 1, 3/2; the second column is the value for L = 1m. and g = 9.8 m/sec2.
1 L
T (5) = 2V2 ‘/3(2.257036) 2.039250 L
T(1) = 2V2 \/;(2.375832) 2.146583 T(a) = 2V2 L
T(3/2) = 2V2 ﬁ(1.628707) 2.375058 5 Linear and Nonlinear Systems 113 3rd Approximation: the full pendulum so F(z) = —% cosz and a 1
Ta =2\/§/ —d .
( ) 0 \fﬂcosz—cosaﬂ/z z T = 2J2 ‘/%(2.256657) 2.038907 T(1) = 2J2 ﬁmsssm) 2.140229 Then as above T(3/2) = N2 ‘/%(2.584125) 2.334777 For a large class of conservative systems there is a direct relation of solutions and periods
to Jacobi elliptic functions and elliptic integrals. The article by Kenneth Meyer gives a
nice discussion of this. If the reader is in a number crunching mood he or she might want to look at: a) f(z) = 2:3, with z(0) : a,:i:(0) = 0;a = 1,a = 24. b) The step function f (z) given in c) above and determine the minimum period of
oscillation. c) The example f (z) = z + $2 given previously; the limits of integration in the period
integral must be adjusted since the orbit is not symmetric in the zdirection. d) Compare the approximate and full pendulum’s period when a is close to 7r, say
a = 31/10, so the pendulum is nearly vertical. Finally, we want to brieﬂy look at the effect of damping on a conservative system.
When we examined the second order linear equations i+w2z=0 and i+cri3+w2z=0, c>0, we saw that the addition of the damping term orb resulted in all solutions approaching
zero as t —> 00. In the phase plane the origin, which was a stable center in the ﬁrst
equation, became an asymptotically stable spiral point or node. A simple phase plane
analysis for the damped pendulum fit+crb+%sinz:0, 0<c<2\/g/L, shows that the saddle points of the undamped pendulum are sustained, whereas the centers
become asymptotically stable spiral points. For the conservative system i +1 f (z) = 0 we can model the addition of damping by
considering the general equation 55 + 9(z,i)i + f(z) = 0, 114 Ordinary Diﬂerential Equations—A Briel Eclectic Tour where the middle term represents the effect of damping—g(a:, could be constant as in
the case above. Multiply the equation by :i: and transpose terms to obtain + : _g(xvj:)d:2a
and if F’ (x) = f (:13) this can be written as d '2 a + 1%)) = —g(:c,:i:):i:2. But the expression in parentheses is ~the energy equation, and if we let :i: = y then we
obtain the equation d _ = _ 2
thtv, :1) 90% 31):; which gives us a measure of the rate of growth of the energy of the system. For instance, if g(:c,y) > 0 for all x,y, the case of positive damping, e.g. g(a:, y) =
c > 0, then the energy of the system is strictly decreasing, so any nontrivial motion
eventually ceases—the system is dissipative. Example: For any conservative system it + f (x) = 0 the addition of a damping term
at, c > 0, results in the energy growth equation d 2 _E =_
dt ($.31) Cy so all motions eventually damp out. Similarly for the equation it + + f (x) = 0, we
have d _E =_ 2
dt (96,31) M31 and the behavior is the same.
On the other hand if g(:c,y) < 0 for all x,y, the case of negative damping, then the
energy of the system is increasing. The effect is that of an internal source pumping in energy.
But the most interesting case is one where both kinds of damping occur in different regions of the phase space, and the balance between energy loss and gain can result in
selfsustained oscillations or limit cycles. A simple example is the following one: 515+(x2+:i:2—1):i:+x=0, and the energy growth equation is d
—E(m,y) = ('Ic2 + 312  1)y2 dt When :62 +312 < 1 the damping is negative, so energy increases, whereas when :32 +312 >
1, the damping is positive and energy decreases. It is easily seen that a:(t) = cost is a
solution, which gives an isolated closed trajectory x2 + y2 = 1 in the phase plane, hence a limit cycle. 5 Linear and Nonlinear Systems 115 Positive The extensively studied Van der Pol equation
di+e(a:2—1)a'r:+a:=0, 6 >0, has the energy growth equation d __ 2 2
dt,E(m,y)— 6(00 1)y For Incl < 1 energy is increasing, whereas for > 1 it is decreasing, so we would
expect to ﬁnd selfsustained oscillations. It has been shown that for any 6 > 0 there
exists a stable limit cycle; the proof is complicated but elegant. For 6 < 0.1 it is closely approximated by z(t) = 2 cos t, hence is circular in shape, with period T E 27r, whereas
for 6 >> 1 it becomes quite jerky and is an example of a relaxation oscillation. Enough! The author’s detour has arrived at a point where it is best to return to the
main highway, and leave the reader to further explore the fascinating territory just brieﬂy
described.
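Before leaving the detour, a short numerical sketch (integrator, step size, and parameters are my own choices) of the self-sustained oscillations just described: for ẍ + (x² + ẋ² − 1)ẋ + x = 0 nearby solutions are drawn to the circle x² + y² = 1, and for the Van der Pol equation with ε = 0.1 the computed orbit settles near the circle of radius 2.

import numpy as np

def integrate(f, u, T=120.0, h=0.005):
    # classical Runge-Kutta for the planar system u' = f(u)
    for _ in range(int(T/h)):
        k1 = f(u); k2 = f(u + h/2*k1)
        k3 = f(u + h/2*k2); k4 = f(u + h*k3)
        u = u + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return u

circle = lambda u: np.array([u[1], -u[0] - (u[0]**2 + u[1]**2 - 1.0)*u[1]])
vdp    = lambda u: np.array([u[1], -u[0] - 0.1*(u[0]**2 - 1.0)*u[1]])   # epsilon = 0.1

for u0 in ([0.5, 0.0], [3.0, 0.0]):
    r1 = np.hypot(*integrate(circle, np.array(u0)))
    r2 = np.hypot(*integrate(vdp, np.array(u0)))
    print(u0, "circle example ->", round(r1, 3), "  Van der Pol ->", round(r2, 3))

Starting inside or outside, both computed orbits end up on their limit cycles, which is exactly the energy-balance argument made above.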
7 Stability and Gronwalling

Preliminaries

Up to this point, except for several nonautonomous first order equations in Chapter 2, the discussion of stability of solutions has been limited to an analysis of the solution x(t) ≡ 0 of constant coefficient second order equations, and the equilibrium point (0,0) of the system ẋ = Ax, where A is a 2 × 2 constant matrix. We want to move a little further afield, and to do so we need a measure of distance, or a norm, in Rⁿ.
First of all if a: = col(.1:1, . . . ,arn) is a point (vector) in R" then deﬁne the norm of a:
by n llgll = Zial 1 This is not the Euclidean norm, but is equivalent, is easier to work with, and has all the
required properties. Then we need a measure of the size of a matrix, so if A = (aij), i,j = 1,... ,n, is an n>< n matrix we deﬁne the normofAby
17.
“All = Z laml,
i,j=1 and it has the properties we need: A+B S “A” + IIBII, AB S AB,0A S CA for any Scalar 0, g for any vector 2:. Obviously, if A = A(t) is continuous, i.e. has continuous entries aij (t), then A(t) is
continuous. To precisely deﬁne the notion of stability and asymptotic stability of a solution a:(t),
deﬁned for to g t < 00, would require some deltaepsilonics, which the reader canNﬁnd
in any intermediate or advanced text. The following picture describes stability of the
solution $(t) : 1:0; we have selected a constant solution for artistic simplicity. . Stability What the picture means is that given an arbitrary e > O we can construct a “tube” of
radius 6 around a:(t) = 2:0, and any solution $(t) satisfying a:(t0) = 1:1, where 1:1 is
in a smaller shaded disc centered at (t0,.1:0), will be deﬁned agd stay inN the tube for all
t Z to. Once close—it stays close. N 5 Linear and Nonlinear Systems 117 The above deﬁnition does not prevent a solution from entering the tube at a later time
than to then leaving it. To preclude this we need the much stronger notion of uniform
stability which we will not discuss. For asymptotic stability of the solution z(t) = mg the picture is this one: X 50> = to Asymptotic Stability We can say that x(t) once close—it stays close, and furthermore the diameter of the tube
goes to zero as t —> 00, so lim :3 t = :1; . t—>oo N( ) NO
The reader’s mind’s eye can now substitute a nonconstant solution x(t) satisfying a:(t0) = x0, and construct a wiggly tube around it to infer the notion of what is meant by stability
and asymptotic stability of the solution a:(t). Stability for Linear Systems For the linear systems Aﬁ, A = (aij), z',j = 1,... ,n, or a': = A(t)§,
A(t) = (a¢j(t)), z',j = 1,... ,n, where each aij(t) is continuous for t 2 to, we know
the solution £(t) of the IVP (*) i = Ax or :i = A(t)a:, a:(t0) = .730,
is given via the fundamental matrix <I>(t) = (<Pij(t)),
$(t) = ¢(t):l}o, Q60) = I, and the columns of <I>(t) are a fundamental system of solutions.
From the last statement we can imply several things about <I>(t): (i) If every solution of (*) is bounded for t 2 to, that means that every entry (Pij (t) is
bounded, so there exists some constant M > 0 such that g M for t 2 to. (ii) In the case of A(t) = A suppose every eigenvalue of A has a negative real part. A
little cogitation of what this implies about each entry (pij (t) will convince one that
there exists constants R > 0, a > 0 such that <I>(t) g 126’“ for t 2 to 2 0. 118 Ordinary Diﬂerential Equations—A Brief Eclectic Tour Example. A = (—14 a fundamental matrix is (8:3; ‘fat'He—at) = 9(t). —e —te_3‘ Multiply by Q(0)1 : (fl 1) to get <I>(t) satisfying <I>(0)'= I,
Φ(t) = ( e^{−3t} − te^{−3t}   −te^{−3t} ;  te^{−3t}   e^{−3t} + te^{−3t} ).
Then
‖Φ(t)‖ = |e^{−3t} − te^{−3t}| + |−te^{−3t}| + |te^{−3t}| + |e^{−3t} + te^{−3t}| ≤ 2e^{−3t} + 4te^{−3t}.
A little analysis shows that if 0 ≤ a < 2.63 then e^{−at} > te^{−3t} for t > 0, so let a = 1; then
‖Φ(t)‖ ≤ 2e^{−3t} + 4e^{−t} ≤ 6e^{−t}.
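The bound is easy to check numerically (a sketch; the matrix entries below are my reconstruction, chosen as the constant matrix consistent with the Φ(t) displayed in the example, and they have the double eigenvalue −3): compute Φ(t) = e^{At}, take the sum-of-absolute-entries norm used in this section, and compare with 6e^{−t}.

import numpy as np
from scipy.linalg import expm

# Reconstructed example matrix (assumption): Phi'(0) = A for the Phi(t) above.
A = np.array([[-4.0, -1.0],
              [ 1.0, -2.0]])

for t in np.linspace(0.0, 5.0, 11):
    Phi = expm(A*t)
    norm = np.abs(Phi).sum()          # the norm ||.|| = sum of |entries|
    print(round(float(t), 1), norm <= 6.0*np.exp(-t) + 1e-12, norm, 6.0*np.exp(-t))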
The first case of stability to logically consider is the general one A = A(t), but we must know the structure of Φ(t), which will usually be very difficult to find. Here's a sometimes helpful result:

If all solutions of ẋ = A(t)x, t ≥ t₀, are bounded then they are stable.

Let x₁(t) be the solution satisfying x₁(t₀) = x₀, and x₂(t) be the solution satisfying x₂(t₀) = x₁. Since all solutions are bounded we know ‖Φ(t)‖ ≤ M, so let ‖x₀ − x₁‖ < ε/M, and the simple estimate
‖x₂(t) − x₁(t)‖ = ‖Φ(t)(x₁ − x₀)‖ ≤ ‖Φ(t)‖ ‖x₁ − x₀‖ ≤ M‖x₁ − x₀‖ < ε
gives the desired conclusion. For a constant matrix A things get better:

If all the eigenvalues of A have negative real parts then every solution of ẋ = Ax
is asymptotically stable. Just replace the constant M in the previous estimate with Re‘”, R > 0, a > 0 to obtain
H520)  gut)“ S Re‘mllgi  go“ This shows that solutions are bounded since H270)“ S Rea—“tum” and furthermore tﬁIgﬂzit) ’ $10)) = 0
Way back in Chapter 2 we introduced Gronwall’s Lemma and touted it as a very useful
tool to study stability. To live up to our claim we examine the nonautonomous system g=(A+C<t»g, g<o>=g; 0(t>=(ci,~(t», m=1,...,n, where each cij(t) is continuous for 0 S t < 00. Our intuition would tell us that if C(t)
were “small” in some sense, the behavior of solutions would mimic those of the system
:5: = Act. We follow that line of reasoning. N First, recall that for the general IVP a': 2 Ax + B(t), 55(0) = 0:0 5 Linear and Nonlinear Systems 119 the solution is given by the variation of parameters formula = <I>(t)£o +‘/(; <I>(t — s)§(s)ds. The trick is to rewrite the system 3': = (A + C(t))x as :b = Ax + C(t):1: and let the last
term be our B Then we can express its solution as the integral equation £(t) = <I>(t)go +/(; <I>(t — s)C(s)g(s)ds which is ripe for Gronwalling.
Suppose all the eigenvalues of A have negative real parts, then <I>(t) S Re‘o‘t, for
some positive a and R and t 2 0. Take norms in the previous equation Hg(t)l S l<1f(t)lgoll+/0 “(Ws)l0(8)lg(s)lld8 t
s Re_atll$oll + / Re"“(“”’llC(s)llllw(s)llds,
N 0 ~
and multiplying through by 6‘“ gives t
llg(t)lle“‘ 3 Bugs“ + RHC(S)Hllg(S)le”dS Now apply Gronwall’s Lemma t
llg(t)lle"’ s Rugonexp [R C(s)llds] 00
s Rugonexp [12/ teams] ,
0 01‘ llg(t)ll s RllgoHe—“t exp [R / °° llC(s)Hds] . 0
We conclude that if the integral f0°° C(.9)Hd.s is ﬁnite then i) All solutions are bounded, hence stable, ii) All solutions approach zero as t —> 00 since a > 0. But the system is linear and homogeneous, so the difference of any two solutions is a
solution. We can conclude Given 3': = (A + C(t))a:, where C(t) is a continuous matrix for 0 S t < 00. If the eigenvalues of A have negative real parts and 13” C(t)dt < 00, then all
solutions are asymptotically stable. Example. 513 + (2 + :i: + a: = 0 can be expressed as the system (3) = (5’1 —2 (Z) = [(31 32) + (3 1%)] (ii) 120 Ordinary Differential Equations—A Brief Eclectic Tour The matrix A has a double eigenvalue °° 1
= _1 dt
A , and f0 1 + t2 is ﬁnite so all solutions are asymptotically stable. The conditions on the growth of C (t) can be weakened, but the intent was to demonstrate
a simple application of Gronwall's Lemma, not give the definitive result.
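A quick numerical sanity check of the example (a sketch; the coefficient 1/(1 + t²) is the equation as I read it from the garbled display above, and the integrator is my choice): solutions of ẍ + (2 + 1/(1 + t²))ẋ + x = 0 do decay.

import numpy as np

def f(t, u):
    # x' = y,  y' = -x - (2 + 1/(1 + t**2))*y
    return np.array([u[1], -u[0] - (2.0 + 1.0/(1.0 + t*t))*u[1]])

u, t, h = np.array([1.0, 1.0]), 0.0, 0.001
for n in range(40000):
    k1 = f(t, u); k2 = f(t + h/2, u + h/2*k1)
    k3 = f(t + h/2, u + h/2*k2); k4 = f(t + h, u + h*k3)
    u = u + h/6*(k1 + 2*k2 + 2*k3 + k4)
    t += h
    if n % 10000 == 9999:
        print(round(t, 1), np.abs(u).sum())   # the norm |x| + |y| decays toward 0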
At this point in the development of potential ODE practitioners, their only likely exposure to nonlinear systems are to those sometimes referred to as almost linear systems.
They may come in different guises such as conservative or Hamiltonian systems, but the
basic structure is the following one, where we stick with n = 2 for simplicity: Given the autonomous system :i: = P(a:,y), 3) = 6203,31): where P and Q are smooth functions, and (0,0) is an isolated equilibrium point.
Then since P(0, 0) = Q(0, 0) = 0 we can apply Taylor’s Theorem and write ‘ _ higher order terms )
(L) h' h d t
. _ 1g er or er erms
y_Q,(0,0)a,+Qy(0,0)y+( inxandy The higher order terms will be second degree or higher polynomials in a: and y.
If the equilibrium point is $(x_0, y_0) \ne (0, 0)$ we can make the change of variables
$$x = u + x_0, \qquad y = v + y_0,$$
and this will convert the system (L) to one in $u$ and $v$ with $(0, 0)$ an equilibrium point; the first partial derivatives will be evaluated at $(x_0, y_0)$. The system (L), neglecting the higher order terms, is the linearized system corresponding to the original system, and we use it to study the stability of $(0, 0)$.

If we let $x = \operatorname{col}(x_1, \ldots, x_n)$ we can write a more general form of (L):
$$\dot{x} = Ax + f(x),$$
where $A$ is a constant $n \times n$ matrix and $f(x)$ is higher order terms, which may be infinite in number, as in the Taylor series expansion. What is important is that $f$ satisfies
$$\|f(x)\|/\|x\| \to 0 \quad \text{as} \quad \|x\| \to 0,$$
and this relation is independent of the choice of norm. Systems of this form are called almost linear systems.

Examples.
a) Suppose the higher order terms (H.O.T. if you want to appear hip) are $x^2 + 2xy + y^4$. Since
$$\frac{1}{|x| + |y|} \le \begin{cases} 1/|x| \\ 1/|y| \end{cases}$$
then
$$\frac{\|f(x)\|}{\|x\|} = \frac{|x^2 + 2xy + y^4|}{|x| + |y|} \le \frac{x^2 + 2|x|\,|y| + y^4}{|x| + |y|} \le \frac{x^2}{|x|} + \frac{2|x|\,|y|}{|y|} + \frac{y^4}{|y|} = 3|x| + |y|^3,$$
which approaches 0 as $|x| + |y| \to 0$.

b) But one could use the Euclidean norm $r = \sqrt{x^2 + y^2} = \|x\|$, then convert to polar coordinates, $x = r\cos\theta$, $y = r\sin\theta$, to obtain
$$\frac{\|f(x)\|}{\|x\|} = \frac{|r^2\cos^2\theta + 2r^2\cos\theta\sin\theta + r^4\sin^4\theta|}{r} \le \frac{r^2 + 2r^2 + r^4}{r} = 3r + r^3 \to 0 \quad \text{as} \quad r \to 0.$$

c) The undamped pendulum $\ddot{x} + \frac{g}{L}\sin x = 0$ can be written as the system $\dot{x} = y$, $\dot{y} = -\frac{g}{L}\sin x = -\frac{g}{L}x + \frac{g}{L}(x - \sin x)$, and
$$\frac{\|f(x)\|}{\|x\|} = \frac{\frac{g}{L}|x - \sin x|}{|x| + |y|} \le \frac{g}{L}\,\frac{|x - \sin x|}{|x|} \to 0 \quad \text{as} \quad |x| + |y| \to 0.$$

d) Not all nonlinearities satisfying the above limit need to be smooth polynomials. The second order equation $\ddot{x} + 2\dot{x} + x + x^{4/3} = 0$ becomes the system
$$\dot{x} = y, \qquad \dot{y} = -x - 2y - x^{4/3},$$
and
$$\frac{\|f(x)\|}{\|x\|} = \frac{|x|^{4/3}}{|x| + |y|} \le |x|^{1/3} \to 0 \quad \text{as} \quad |x| + |y| \to 0.$$

Remark. This is an opportunity to point out the handy "little o" notation: $\|f(x)\| = o(\|x\|)$ as $\|x\| \to 0$ means $\|f(x)\|/\|x\| \to 0$ as $\|x\| \to 0$. Note that $\|f(x)\| = o(\|x\|)$ as $\|x\| \to 0$ implies $f(0) = 0$, which means $x(t) \equiv 0$ is a solution of the system $\dot{x} = Ax + f(x)$, or $0$ is an equilibrium point of the system.
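For readers who like to see the limit numerically, here is a rough check (my own sketch, assuming $g/L = 1$) that $\|f(x)\|/\|x\| \to 0$ for the pendulum nonlinearity of example (c), using the norm $|x| + |y|$.

```python
# Sketch: the ratio ||f(x)||/||x|| for f = (0, (g/L)(x - sin x)) shrinks as x, y -> 0.
import numpy as np

g_over_L = 1.0   # illustrative value of g/L

def ratio(x, y):
    # ||f|| / ||x|| with the norm |x| + |y|
    return g_over_L * abs(x - np.sin(x)) / (abs(x) + abs(y))

for s in [1.0, 0.1, 0.01, 0.001]:
    print(f"s = {s:>6}: ratio = {ratio(s, s):.3e}")   # shrinks roughly like s**2 / 12
```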
Under these more general growth conditions on $f(x)$ we would expect that if the solution $x(t) \equiv 0$ (or equivalently the equilibrium point $x = 0$) of the linearized system is asymptotically stable, then so is the solution $x(t) \equiv 0$ of the nonlinear (almost linear) system. We sketch a proof of this conjecture, which seems a nice way to end this ODE sojourn.

First of all, if $\Phi(t)$ is the fundamental matrix associated with the system $\dot{x} = Ax$, $\Phi(0) = I$, then we can employ the same trick used in the proof of the previous stability result. If $x(t)$ is the solution satisfying the I.C. $x(0) = x_0$ then it satisfies the integral equation
$$x(t) = \Phi(t)x_0 + \int_0^t \Phi(t - s)f(x(s))\,ds.$$
The expression is ripe for Gronwalling, but we first need some assumptions and estimates.
Assume the eigenvalues of $A$ all have negative real parts, so there exist positive constants $R, \alpha$ such that $\|\Phi(t)\| \le Re^{-\alpha t}$, $t \ge 0$. Then
$$\|x(t)\| \le Re^{-\alpha t}\|x_0\| + \int_0^t Re^{-\alpha(t - s)}\|f(x(s))\|\,ds$$
or
$$\|x(t)\|e^{\alpha t} \le R\|x_0\| + \int_0^t Re^{\alpha s}\|f(x(s))\|\,ds.$$
Next note that the growth condition $\|f(x)\| = o(\|x\|)$ implies that given any $\epsilon > 0$ (the fearsome $\epsilon$ at last!) there exists a $\delta > 0$ (and its daunting sidekick!) such that
$$\|f(x)\| < \epsilon\|x\| \quad \text{if} \quad \|x\| < \delta.$$
Since $x(t)$ is assumed to be close to the zero solution, we can let $\|x_0\| < \delta$. But $x(t)$ is continuous, so $\|x(t)\| < \delta$ for some interval $0 \le t \le T$, and the great Gronwall moment has arrived! We substitute $\epsilon\|x(s)\|$ for $\|f(x(s))\|$ in the previous estimate, Gronwallize, and obtain
$$\|x(t)\|e^{\alpha t} \le R\|x_0\|\exp\left[\int_0^t \epsilon R\,ds\right], \quad 0 \le t < T,$$
or
$$\|x(t)\| \le R\|x_0\|e^{(\epsilon R - \alpha)t}, \quad 0 \le t < T,$$
the crucial estimate. Since $\epsilon > 0$ is at our disposal and so is $x_0$, let $\epsilon < \alpha/R$ and choose $x_0$ satisfying $\|x_0\| < \delta/2R$. The estimate gives us that $\|x(t)\| < \delta/2$ for $0 \le t < T$, and we can
apply a continuation argument interval by interval to infer that

i) Given any solution $x(t)$ satisfying $x(0) = x_0$, where $\|x_0\| < \delta/2R$, it is defined for all $t \ge 0$ and satisfies $\|x(t)\| < \delta/2$. Since $\delta$ can be made as small as desired, this means the solution $x(t) \equiv 0$ (or the equilibrium point $0$) is stable.

ii) Since $\epsilon R - \alpha < 0$, this implies that $\lim_{t \to \infty} \|x(t)\| = 0$, hence $x(t) \equiv 0$ is asymptotically stable.

We have proved a version of the cornerstone theorem, due to Perron and Poincaré, for almost linear systems:

Given $\dot{x} = Ax + f(x)$ where $f$ is continuous for $\|x\| < a$, $a > 0$, and $\|f(x)\| = o(\|x\|)$ as $\|x\| \to 0$. If all the eigenvalues of $A$ have negative real parts then the solution $x(t) \equiv 0$ is asymptotically stable.
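As an illustration (my own sketch, not part of the text), the theorem applied to example (d) above predicts that solutions starting near the origin decay to zero, since the linear part has the double eigenvalue $-1$; a direct numerical integration from a small initial condition is consistent with that prediction.

```python
# Sketch: example (d), x'' + 2x' + x + x**(4/3) = 0, integrated from a small start.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    x, y = z
    # np.cbrt handles negative x, so cbrt(x)**4 is the real-valued x**(4/3)
    return [y, -x - 2.0 * y - np.cbrt(x) ** 4]

sol = solve_ivp(rhs, (0.0, 30.0), [0.2, 0.0], rtol=1e-8, atol=1e-10)
print("final |x|, |y|:", abs(sol.y[0, -1]), abs(sol.y[1, -1]))   # both should be tiny
```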
Remark.

i) The result can be generalized to the case where $f = f(t, x)$ is assumed to be continuous for $\|x\| < a$, $0 \le t < \infty$, and the estimate $\|f(t, x)\| = o(\|x\|)$ as $\|x\| \to 0$ is uniform in $t$. For example, $|x^2 \cos t| = o(|x|)$ as $|x| \to 0$, uniformly in $t$, since $|x^2 \cos t| \le x^2$.

ii) By considering the case $t \to -\infty$ and assuming all the eigenvalues of $A$ have positive real parts, the proof above implies that if $\|f(x)\| = o(\|x\|)$ as $\|x\| \to 0$ then the solution $x(t) \equiv 0$ is unstable.

iii) For the two-dimensional case it follows that if the origin is a stable (unstable) spiral for the linear system, it is a stable (unstable) spiral for the almost linear system, and similarly for stable (unstable) nodes. It can be shown that the saddle point configuration is preserved, whereas a center may remain a center or be transformed into a stable or unstable spiral.
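To make remark iii) concrete, here is a small sketch (mine, with an assumed damping coefficient of $0.5$) that classifies the equilibria of the damped pendulum $\ddot{x} + 0.5\dot{x} + \sin x = 0$ by the eigenvalues of the linearization at each equilibrium.

```python
# Sketch: eigenvalues of the Jacobian of (x' = y, y' = -sin x - 0.5 y) at equilibria.
import numpy as np

def jacobian(x_eq, c=0.5):
    # d/dx(-sin x) = -cos x, evaluated at the equilibrium (x_eq, 0)
    return np.array([[0.0, 1.0],
                     [-np.cos(x_eq), -c]])

for x_eq in (0.0, np.pi):
    eig = np.linalg.eigvals(jacobian(x_eq))
    print(f"equilibrium ({x_eq:.3f}, 0): eigenvalues {np.round(eig, 3)}")
# (0, 0): complex pair with negative real part -> stable spiral, preserved for the
# nonlinear pendulum; (pi, 0): real eigenvalues of opposite sign -> saddle.
```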
The Perron/Poincaré theorem gives credence to the assertion that when we linearize around an equilibrium point $x_0$ by using Taylor's Theorem, the asymptotic stability or instability of the linear system is preserved. The theorem is a fitting milestone at which to end this brief, eclectic journey.

FINALE

To those who have snarled and staggered this far, the author hopes you have been
stimulated by this small tome, and will continually rethink your approach to the subject.

And ALWAYS REMEMBER
The subject is ORDINARY DIFFERENTIAL EQUATIONS
and NOT
Algebra
Calculus
Linear Algebra
Numerical Analysis
Computer Science

References

(And other tracts the author has enjoyed, with brief commentary.)

D. Acheson, From Calculus to Chaos, An Introduction to Dynamics, Oxford University
Press, Oxford, 1997. A delightful book to waltz through the calculus, and the ﬁnal chapters are a nice
introduction to ODEs and chaotic systems. W.E. Boyce and R.C. DiPrima, Elementary Differential Equations and Boundary Value
Problems, 6th Ed. Wiley, New York, 1997. Old timers will remember the thin, classic grandfathers of this book, which has now
grown plump like its contemporaries. But it is still an excellent introductory text. F. Brauer and J.A. Nohel, The Qualitative Theory of Ordinary Differential Equations, An
Introduction, W.A. Benjamin, 1969; (reprinted) Dover Publications, New York, 1989. One of the ﬁrst textbooks on the modern theory; the discussion of Lyapunov theory is
excellent. F. Brauer and D.A. Sánchez, Constant rate population harvesting: equilibrium and
stability, Theor. Population Biology 8 (1975), 12–30. Probably the first theoretical paper (but readable) to discuss the effects of harvesting on
equations of population growth. Sandhill Cranes are featured. M. Braun, Differential Equations and Their Applications: An Introduction to Applied
Mathematics, 4th Ed. Springer-Verlag, New York, 1992. A well written introductory textbook with a solid emphasis on applications. The
discussion of art forgery and Carbon-14 dating is unsurpassed. R.L. Burden and J.D. Faires, Numerical Analysis, 5th Ed. PWS-Kent, Boston, 1993. An easy-to-read introduction to numerical methods with wide coverage and good examples. G.F. Carrier and C.E. Pearson, Ordinary Differential Equations, Blaisdell, 1968; (reprinted)
Soc. for Industrial and Applied Mathematics, Philadelphia, 1991. Snapshot-like coverage of a lot of classic topics important in applications, with problems
that are formidable. F. Diacu, An Introduction to Differential Equations, Order and Chaos, W.H. Freeman,
New York, 2000. A recent text which bucks the current trend towards obesity, and has a very modern ﬂavor.
If you believe an introductory book should cover 1 1/2 semesters of topics, like the
original Boyce and DiPrima text, look at this one. K.O. Friedrichs, Advanced Ordinary Differential Equations, Gordon and Breach, New
York, 1966. One of the volumes of the old Courant Institute Notes which were all great contributions
to modern mathematics. W. Hurewicz, Lectures on Ordinary Differential Equations, MIT Press, Cambridge, 1958. Truly a classic! This little paperback was one of the first books to introduce the modern
theory, as developed by the Soviet school, to English-speaking audiences. D.W. Jordan and P. Smith, Nonlinear Ordinary Differential Equations, Oxford University
Press, Oxford, 1987. Lots of wonderful examples of nonlinear systems and the techniques needed to study their
behavior, a very handy book to have on the shelf. E. Kamke, Differentialgleichungen, Lösungsmethoden und Lösungen, Chelsea, New York,
1948. If it can be solved, no matter how weird it looks, it’s probably in here! A more recent
example of the same mad pursuit is a handbook by A.D. Polyanin and V.F. Zaitsev
(CRC Press, 1995) which has over 5000 ODEs and their solutions, including 267 Riccati equations! W.D. Lakin and D.A. Sánchez, Topics in Ordinary Differential Equations, Prindle, Weber
and Schmidt, 1970; (reprinted) Dover Publications, New York, 1982. Covers asymptotics, singular perturbation, nonlinear oscillations, and the relation between boundary value problems and elementary calculus of variations. A.C. Lazer and D.A. Sánchez, Periodic equilibria under periodic harvesting, Math.
Magazine 57 (1984), 156–158. The authors thought this would be a nice paper for a student journal; consequently, its
useful major result is almost unknown to practicing mathematical ecologists. I.E. Leonard, The matrix exponential, SIAM Review 38 (1996), 507–512. If you really want to know how to compute exp(tA) here's a good way to do it. L.N. Long and H. Weiss, The velocity dependence of aerodynamic drag: a primer for
mathematicians, American Mathematical Monthly 106 (1999), 127–135. Imparts some wisdom into the oft-stated and misused assumption that the air resistance
of a falling body is proportional to the square of its velocity. R.E. O'Malley, Jr., Thinking About Ordinary Differential Equations, Cambridge University Press, Cambridge, 1997. A small paperback which is an excellent introduction, with fine problems, and of course,
O'Malley's favorite topic, singular perturbations, gets major billing. R.E. O'Malley, Jr., Give your ODEs a singular perturbation!, Jour. Math. Analysis and
Applications 25 (2000), 433—450. If you want a ﬂavor of singular perturbations, this paper is a very nice collection of
examples, starting with the simplest ones. K.R. Meyer, Jacobi elliptic functions from a dynamical systems point of view, American
Mathematical Monthly 108 (2001), 729–737.
The solutions and periods of conservative systems are intimately connected to elliptic
functions and elliptic integrals. The author gives a first-rate introduction to this connection. V. Pliss, Nonlocal Problems of the Theory of Oscillations, Academic Press, New York,
1966. A leading Soviet theorist gives a thorough introduction to the subject. It is an older
treatment but is one of the few books that discusses polynomial equations. J. Polking, A. Boggess, and D. Arnold, Differential Equations, Prentice Hall, Upper
Saddle River, 2001. A recent big, introductory book, with a nice interplay between computing and theory,
plus a very good discussion of population models. D.A. Sánchez, Ordinary Differential Equations and Stability Theory: An Introduction,
W.H. Freeman, 1968; (reprinted) Dover Publications, New York, 1979. The author's attempt, when he was a newly-minted PhD, to write a compact introduction
to the modern theory, suitable for a graduate student considering doing ODE research, or
for an undergraduate honors course. Its price may help maintain its popularity. D.A. Sánchez, Ordinary differential equations texts, American Mathematical Monthly 105
(1998), 377—383. A somewhat acerbic review of the topics in a collection of current leviathan ODE texts.
The review played a large role in the inspiration to write the current book. D.A. Sánchez, R.C. Allen, Jr., and W.T. Kyner, Differential Equations, Addison-Wesley,
Reading, 1988.
Yes, this was probably one of the ﬁrst corpulent introductory texts, now interred, but made the important point that numerical methods could be effectively intertwined with
the discussion of the standard types of equations, and not as a stand-alone topic. R.I.P. S.H. Strogatz, Nonlinear Dynamics and Chaos, Addison-Wesley, Reading, 1994. Gives a very thorough introduction to the dynamical systems approach, with lots of good
examples and problems. S.P. Thompson, Calculus Made Easy, Martin Gardner (ed.), originally published 1910;
St. Martin’s Press, New York, 1998. This reference is here because in the 1970’s the author wrote a letter in the Notices of
the American Mathematical Society protesting the emergence of telephone directory size
calculus books. The creation of the Sylvanus P. Thompson Society, honoring the author
of this delightful, incisive, brief tome, was suggested to encourage the writing of slim
calculus texts. Perhaps a similar organization is needed today, considering the size of
ODE books. F. Verhulst, Nonlinear Differential Equations and Dynamical Systems, Springer-Verlag,
Berlin, 1996. A very well written and thorough advanced textbook which bridges the gap between a ﬁrst
course and modern research literature in ODEs. The mix of qualitative and quantitative
theory is excellent with good problems and examples.

Index

A
almost linear systems, 120
always close, 15
amplification factor, 76
Annihilator Method, 72
asymptotic stability, 15, 117
attractor, 14

B
beats, 77
boundary layer, 25, 51
Boundary Value Problems, 82
Brauer, Fred, 18 C
center, 97
characteristic polynomial, 64
coexistence, 101
comparison of coefﬁcients, 71, 92
comparison results, 31
Competition Models, 100
competitive exclusion, 101
conservative systems, 104
continuation, 6, 39
convolution integral, 24, 78
cycles, 94 D
damping, 68
degenerate saddle point, 107
dependence on initial conditions, 38 E
eigenvalue problem, 83
energy equation, 105
energy surface, 105 Euler’s Method, 44, 82
existence, 5

F
feedback control, 27
finite escape time, 12
Friedrichs, K., 34
fundamental matrix, 20
fundamental pair of solutions, 58, 62

G
gain, 76
global error, 46
Gronwall’s Lemma, 38, 118 H harmonic oscillator, 67
harvesting, 18, 103
homespun stability criterion, 15, 84 I Improved Euler’s Method, 44
Incredible Shrinking r, 41
inﬁnite series solutions, 85
initial condition, 4 initial value problem, 5, 53
integrating factors, 21

L
Lazer, A.C., 36
limit cycles, 94
linear equation, 20, 55
linear independence, 59
linearity, 57 linearized stability analysis, 15
little gnome, 45
local error, 46
Lotka-Volterra model, 102
M
maximum interval of existence, 7, 13, 40
Method of Undetermined Coefficients, 23, 71
Meyer, Kenneth, 113 N
nonhomogeneous term is discontinuous, 23, 79
nonuniqueness, 12
norm, 116
nullclines, 103 O
O’Malley Jr., Robert, 25
once close, 15 P
particular solution, 71
periodic, closed orbit, 111
periodic harvesting, 34
periodic solutions, 32
Perron/Poincaré theorem, 122
perturbation expansions, 86
phase line, 14
phase plane, 92
phase-amplitude representation, 67
Pliss, V., 30
polynomial differential equation, 30
potential well, 106
Predator-Prey Models, 99, 101
Principle of Superposition, 72
proper nodes, 98 R
reduction of order, 56, 64
relaxation oscillation, 115
repeller, 14
resonance, 78, 80
Riccati equation, 26
RKF methods, 47 S
saddle point, 96
self-sustained oscillations, 114
semistable equilibrium point, 15
sensitive dependence on initial conditions, 39
separable equation, 9
shooting method, 83
singular perturbations, 25
sink, 14
solution, 3
source, 14
stability, 15, 84, 116
stable node, 95, 97
stable spiral, 96
steady state solution, 76
structural stability, 102 T
Taylor series numerical methods, 37
trajectory, 93
transient solution, 76
turning points, 26 U
undamped pendulum, 108
uniqueness, 5 V
variation of parameters, 21, 71 W
Wronskian, 58, 62
Ordinary Differential Equations: A Brief Eclectic Tour provides a brief, concise guide to the key concepts of ordinary differential equations. It touches on important
topics such as stability, resonance, existence of periodic solutions, and the essential role of continuation of solutions. It
provides significant information in a concise and accessible manner. This much-needed guide is an informal one, being conceptual rather than definitive, and more lighthearted than pedagogic. It is based on the author's twenty-five years of teaching and researching ordinary differential equations, during which time
he grappled with questions such as how to present the material to students effectively in the time available in a normal
school year. What are the key topics and theoretical foundations that must be covered, while still allowing time for an
examination of the rich areas of nonlinear equations and stability theory? What are suggested strategies when confronted
with such possibly time-consuming topics as numerical analysis, series solutions, or boundary value problems? This is not a textbook and has few suggested problems, but it contains an abundance of illuminating examples. It is intended
as a guide for the instructor, a supplementary text for an introductory or intermediate course, or a self-study text for
enthusiastic students. The order of topics covered is the standard one used in most textbooks, but the discussion brings out their real conceptual
underpinnings. A few novel items are included such as existence of periodic solutions of first-order equations, general
conditions for the onset of resonance, and harvesting of two-population models. Finally, Gronwall's Lemma leads to
a proof of the Poincaré/Perron Theorem, the foundation of the analysis of almost linear systems.