Set 20: Systems of First Order Linear ODEs, Part 1

Kyle A. Gallivan
Department of Mathematics
Florida State University

Ordinary Differential Equations, Fall 2009
Systems of Linear ODEs

• xi : R → R, i = 1, 2, 3
• Suppose xi′ depends linearly on x1, x2, x3:

    xi′(t) = pi1(t)x1(t) + pi2(t)x2(t) + pi3(t)x3(t)

• A system of 3 linear ODEs defining 3 functions:

    x1′(t) = p11(t)x1(t) + p12(t)x2(t) + p13(t)x3(t) + g1(t)
    x2′(t) = p21(t)x1(t) + p22(t)x2(t) + p23(t)x3(t) + g2(t)
    x3′(t) = p31(t)x1(t) + p32(t)x2(t) + p33(t)x3(t) + g3(t)
Matrix Form

    x′(t) = P(t)x(t) + g(t)

    P(t) = [ p11(t)  p12(t)  p13(t) ]
           [ p21(t)  p22(t)  p23(t) ]
           [ p31(t)  p32(t)  p33(t) ]

    x(t) = [ x1(t) ]     x′(t) = [ x1′(t) ]     g(t) = [ g1(t) ]
           [ x2(t) ]             [ x2′(t) ]            [ g2(t) ]
           [ x3(t) ]             [ x3′(t) ]            [ g3(t) ]

Matrix Vector Multiplication

    x′(t) = P(t)x(t) + g(t)

    P(t)x(t) = [ p11(t) ]          [ p12(t) ]          [ p13(t) ]
               [ p21(t) ] x1(t) +  [ p22(t) ] x2(t) +  [ p23(t) ] x3(t)
               [ p31(t) ]          [ p32(t) ]          [ p33(t) ]

This is a linear combination of the columns of the matrix with coefficients given by the corresponding elements of the vector.
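This column-oriented view of the product is easy to check with a small numerical sketch (the matrix and vector values below are arbitrary illustrations, not from the notes):

```python
import numpy as np

# An arbitrary 3x3 "P(t0)" frozen at one time t0, and a state vector x(t0).
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
x = np.array([2.0, -1.0, 5.0])

# The usual matrix-vector product...
direct = P @ x

# ...equals the linear combination of the columns of P
# with coefficients x1, x2, x3.
columns = x[0] * P[:, 0] + x[1] * P[:, 1] + x[2] * P[:, 2]

print(direct)                        # [ 0. 14. 13.]
print(np.allclose(direct, columns))  # True
```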
Existence and Uniqueness

Theorem 20.1 (Textbook page 359). If pij(t), 1 ≤ i, j ≤ n, and gi(t), 1 ≤ i ≤ n, are continuous on an open interval α < t < β, then there exists a unique solution x1(t) = φ1(t), . . . , xn(t) = φn(t) to the initial value problem

    x′(t) = P(t)x(t) + g(t),  x(t0) = x(0)

for any α < t0 < β and any x(0) ∈ Rn, and this solution exists on the entire open interval.

Higher Order Equations
• A single higher order linear equation can be transformed into a system of first order equations.
• u′′′ + p(t)u′′ + q(t)u′ + r(t)u = f(t)
• Let x1 = u, x2 = x1′ = u′, x3 = x2′ = u′′.
• System of dimension 3:

    x1′(t) = x2(t)
    x2′(t) = x3(t)
    x3′(t) = −r(t)x1(t) − q(t)x2(t) − p(t)x3(t) + f(t)
Higher Order Equations

    x1′(t) = x2(t)
    x2′(t) = x3(t)
    x3′(t) = −r(t)x1(t) − q(t)x2(t) − p(t)x3(t) + f(t)

In matrix form,

    [ x1′(t) ]   [   0      1      0   ] [ x1(t) ]   [   0  ]
    [ x2′(t) ] = [   0      0      1   ] [ x2(t) ] + [   0  ]
    [ x3′(t) ]   [ −r(t)  −q(t)  −p(t) ] [ x3(t) ]   [ f(t) ]

    x′(t) = P(t)x(t) + g(t)
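This reduction can be sketched in code. The coefficients, initial values, and the simple Euler stepper below are illustrative choices (and `companion_system` is a hypothetical helper), not part of the notes:

```python
import numpy as np

def companion_system(p, q, r, f):
    """Hypothetical helper: build (A, g) for the first order form of
    u''' + p*u'' + q*u' + r*u = f with constant coefficients."""
    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-r, -q, -p]])
    g = np.array([0.0, 0.0, f])
    return A, g

# u''' = 0 with u(0) = 0, u'(0) = 1, u''(0) = 0 has the solution u(t) = t.
A, g = companion_system(p=0.0, q=0.0, r=0.0, f=0.0)
x = np.array([0.0, 1.0, 0.0])      # state vector [u, u', u'']

h, steps = 0.01, 100               # crude forward Euler on x' = Ax + g
for _ in range(steps):
    x = x + h * (A @ x + g)

print(x[0])   # ≈ 1.0, i.e. u(1) for u(t) = t
```

A production code would use an adaptive integrator (e.g. scipy.integrate.solve_ivp) on the same first order form.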
Systems of ODEs

Given the ODE u(n) = F(t, u, u′, . . . , u(n−1)), the variable definitions

    x1 = u, x2 = x1′ = u′, x3 = x2′ = u′′, . . . , xn = xn−1′ = u(n−1)

convert the single nonlinear ODE of order n into a system of n nonlinear first order ODEs:

    x1′ = x2
    x2′ = x3
    ...
    xn−1′ = xn
    xn′ = F(t, x1, x2, . . . , xn)
Systems of ODEs

The general form of a system of first order ODEs is

    x1′ = F1(t, x1, x2, . . . , xn−1, xn)
    x2′ = F2(t, x1, x2, . . . , xn−1, xn)
    ...
    xn−1′ = Fn−1(t, x1, x2, . . . , xn−1, xn)
    xn′ = Fn(t, x1, x2, . . . , xn−1, xn)

Linear Constant Coefficient First Order System
Suppose P(t) is a constant matrix A and for simplicity take n = 2.

    x′(t) = P(t)x(t) + g(t) = Ax(t) + g(t)

    A = [ α11  α12 ]     x(t) = [ x1(t) ]     x′(t) = [ x1′(t) ]     g(t) = [ g1(t) ]
        [ α21  α22 ]            [ x2(t) ]             [ x2′(t) ]            [ g2(t) ]

Homogeneous Problem

If g(t) = 0 the problem is a homogeneous linear constant coefficient
first order system.

    x′(t) = Ax(t)

    A = [ α11  α12 ]     x(t) = [ x1(t) ]     x′(t) = [ x1′(t) ]
        [ α21  α22 ]            [ x2(t) ]             [ x2′(t) ]

A Solution
Suppose we have a scalar r and a (constant) vector v ∈ R2 such that

    Av = vr

i.e., the action of A on the vector v is equivalent to scaling by r. Then r is an eigenvalue and v is an associated eigenvector of A.

A Solution

Consider:
    x(t) = v e^{rt},  i.e.,  x(t) = [ x1(t) ]  =  [ v1 e^{rt} ]
                                    [ x2(t) ]     [ v2 e^{rt} ]

Then

    x′(t) = [ x1′(t) ]  =  [ v1 r e^{rt} ]  =  x(t) r
            [ x2′(t) ]     [ v2 r e^{rt} ]

and

    x′ = x(t) r = v r e^{rt} = A v e^{rt} = A x

So an eigenvalue, eigenvector pair solves the homogeneous problem.
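This fact is easy to check numerically, using a matrix with a known real eigenpair (the values below use A = [3 −1; 4 −2], which also appears in a later example; any matrix with a real eigenpair would do):

```python
import numpy as np

A = np.array([[3.0, -1.0],
              [4.0, -2.0]])
r, v = 2.0, np.array([1.0, 1.0])
assert np.allclose(A @ v, r * v)     # (r, v) is an eigenpair of A

# x(t) = v e^{rt} satisfies x'(t) = A x(t): compare the exact
# derivative r v e^{rt} with A x(t) at several times.
for t in np.linspace(0.0, 1.0, 5):
    x = v * np.exp(r * t)
    assert np.allclose(r * x, A @ x)

print("x(t) = v e^{rt} solves x' = Ax")
```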
General Solution

• One solution:

    x(t) = v e^{rt}

• The general solution to the homogeneous problem has a fundamental set of solutions so that all solutions to the homogeneous system can be written, for n = 2,

    x(t) = x(1)(t)c1 + x(2)(t)c2

• What is the fundamental set of solutions for x′ = Ax?

General Solution
• Want the general solution to the homogeneous system

    x(t) = x(1)(t)c1 + x(2)(t)c2

• Need eigenvalues and eigenvectors.
• Need linear independence.
• We proceed as with scalar homogeneous problems but use vector-valued functions as solutions.

General Solution
• Suppose x(t) is a solution to x′ = Ax on the open interval α < t < β.
• For any α < t0 < β, x(t) is the unique solution with the value x(t0).
• Find unique constants so that x(t) = x(1)(t)c1 + x(2)(t)c2 for two solutions x(1)(t) and x(2)(t).

General Solution
First note that given two solutions x(1)(t) and x(2)(t) and two constants c1 and c2, let

    x(t) = x(1)(t)c1 + x(2)(t)c2

Then

    x′(t) = (x(1))′(t) c1 + (x(2))′(t) c2
          = A x(1)(t)c1 + A x(2)(t)c2 = A ( x(1)(t)c1 + x(2)(t)c2 )
          = A x(t)

So x(t) = x(1)(t)c1 + x(2)(t)c2 is also a solution to the homogeneous system.

General Solution
For any solution x(t) to have this form we must have, for any α < t0 < β, a unique pair of constants c1 and c2 with

    x(t0) = x(1)(t0)c1 + x(2)(t0)c2

    [ x1(t0) ]   [ x1(1)(t0)  x1(2)(t0) ] [ c1 ]
    [ x2(t0) ] = [ x2(1)(t0)  x2(2)(t0) ] [ c2 ]

    X(t0)c = x(t0)

Linear Algebra
Lemma 20.2. An n × n linear system of equations Av = b has a unique solution v = A−1 b for any vector b ∈ Rn if any of the following equivalent conditions are true:

• All of the eigenvalues of A are nonzero.
• det(A) ≠ 0.
• The linear system Av = 0 has a unique solution, the zero vector v = 0 in Rn.
• The columns of A are n linearly independent vectors.

Note that if Av = 0 has a solution v ≠ 0 then the columns of A are called linearly dependent.
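These equivalent conditions are easy to check numerically; here is a sketch using the two 2 × 2 matrices from the determinant examples later in this set (the right-hand side b is an arbitrary illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, -3.0]])   # det = -5: nonsingular
B = np.array([[1.0, -2.0],
              [2.0, -4.0]])   # det = 0: column 2 = -2 * column 1

print(np.linalg.det(A))       # ≈ -5.0 -> unique solution for any b
print(np.linalg.det(B))       # ≈  0.0 -> columns linearly dependent

b = np.array([1.0, 1.0])
v = np.linalg.solve(A, b)     # v = A^{-1} b exists and is unique
print(np.allclose(A @ v, b))  # True
```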
General Solution
Definition 20.1. x(1)(t) and x(2)(t) are a fundamental set of solutions to the homogeneous system of ODEs x′ = Ax with n = 2, and x(t) = x(1)(t)c1 + x(2)(t)c2 is its general solution, if

• at any point t0 in α < t < β the determinant det(X(t0)) ≠ 0, where

    X(t0) = [ x(1)(t0)  x(2)(t0) ] ∈ R2×2

• or, equivalently, the vectors x(1)(t0) and x(2)(t0) are linearly independent.

Note that det(X(t)) = W[x(1)(t), x(2)(t)] is called the Wronskian of the set of vectors.

Fundamental Theorem
Theorem 20.3 (Textbook p. 387). If the vector functions x(1)(t), . . . , x(n)(t) are linearly independent vectors at each point in an open interval α < t < β and each solve

    x′ = P(t)x

then any solution x(t) can be expressed as a unique linear combination of the x(i)(t). The vector functions x(1)(t), . . . , x(n)(t) are a fundamental set of solutions for the homogeneous ODE.

Fundamental Theorem
Theorem 20.4 (Textbook p. 387). If the vector functions x(1)(t), . . . , x(n)(t) solve in an open interval α < t < β the ODE

    x′ = P(t)x

then the Wronskian W[x(1)(t), . . . , x(n)(t)] is either identically 0 or always nonzero on the entire interval.

In other words, the solutions are linearly independent vectors at all points in the interval if they are linearly independent vectors at any single point.

Determinant Test n = 2

Definition 20.2. The determinant of the 2 × 2 matrix A is given by
    det(A) = det [ α11  α12 ]  =  α11 α22 − α21 α12
                 [ α21  α22 ]

    det [ 1   1 ]  =  −3 − 2 = −5
        [ 2  −3 ]

    det [ 1  −2 ]  =  −4 + 4 = 0
        [ 2  −4 ]

Special Case for Two Vectors
• Note that two vectors z1 and z2 are linearly dependent if and only if one is a scalar multiple of the other, i.e., z1 = αz2.
• For n = 2 this means the matrix with z1 and z2 as its columns has determinant 0.

General Solution n = 2

Suppose we have two eigenvalue/eigenvector pairs, (v(1), r1) and
(v(2), r2), and therefore

    x(1)(t) = v(1) e^{r1 t} = [ v1(1) e^{r1 t} ]     x(2)(t) = v(2) e^{r2 t} = [ v1(2) e^{r2 t} ]
                              [ v2(1) e^{r1 t} ]                               [ v2(2) e^{r2 t} ]

    det(X(t)) = det [ v1(1) e^{r1 t}   v1(2) e^{r2 t} ]
                    [ v2(1) e^{r1 t}   v2(2) e^{r2 t} ]
              = v1(1) v2(2) e^{(r1+r2)t} − v1(2) v2(1) e^{(r1+r2)t}
              = ( v1(1) v2(2) − v1(2) v2(1) ) e^{(r1+r2)t}
              = det(X(0)) e^{(r1+r2)t}

Since e^{(r1+r2)t} is never 0, det(X(t)) ≠ 0 ↔ det(X(0)) ≠ 0, or equivalently, v(1) and v(2) are linearly independent.
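The identity det(X(t)) = det(X(0)) e^{(r1+r2)t} can also be verified numerically; the eigenpairs below are the ones found for A = [3 −1; 4 −2] later in this set:

```python
import numpy as np

v1, r1 = np.array([1.0, 1.0]), 2.0    # eigenpair (v(1), r1)
v2, r2 = np.array([1.0, 4.0]), -1.0   # eigenpair (v(2), r2)

def detX(t):
    # Columns of X(t) are x(1)(t) = v1 e^{r1 t} and x(2)(t) = v2 e^{r2 t}.
    X = np.column_stack([v1 * np.exp(r1 * t), v2 * np.exp(r2 * t)])
    return np.linalg.det(X)

for t in [0.0, 0.5, 1.3]:
    assert np.isclose(detX(t), detX(0.0) * np.exp((r1 + r2) * t))

print(detX(0.0))   # 3.0 = v1(1) v2(2) - v1(2) v2(1)
```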
Independence
Theorem 20.5. If λ1 ≠ λ2 ≠ · · · ≠ λn are distinct eigenvalues of A ∈ Rn×n with eigenvectors v(1), v(2), . . . , v(n), respectively, then

• the eigenvectors are linearly independent;
• the matrix V ∈ Rn×n whose ith column is v(i) is nonsingular, i.e., V−1 exists uniquely;
• the system of linear equations V c = b has a unique solution c = V−1 b for any b ∈ Rn.

Fundamental Theorem
Theorem 20.6. The homogeneous linear constant coefficient system of first order ODEs

    x′ = Ax

has a fundamental set of solutions

    x(i)(t) = v(i) e^{ri t},  1 ≤ i ≤ n

and general solution

    x(t) = x(1)(t)c1 + · · · + x(n)(t)cn

if (v(1), r1), . . . , (v(n), rn) are eigenvector/eigenvalue pairs with distinct eigenvalues r1 ≠ r2 ≠ · · · ≠ rn.

Independent or Dependent

Tests for linear independence or dependence of a set of vectors x(i) ∈ Rn
for i = 1, . . . , n, where X is the matrix whose columns are the x(i):

• Solve Xc = 0 for c.
  – If there is a nonzero solution then the vectors are dependent. In this case there is more than one solution, i.e., some c ≠ 0.
  – If c = 0 is the only solution then the vectors are independent.
• Compute the determinant of X, det(X).
  – If det(X) = 0 then the vectors are dependent.
  – If det(X) ≠ 0 then the vectors are independent.
• Check the eigenvalues of X.
  – If at least one eigenvalue is 0 then the vectors are dependent.
  – If all eigenvalues are nonzero then the vectors are independent.
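All three tests are one-liners in numpy; the sketch below applies them to the independent pair [1, 1] and [1, 4] from the n = 2 example in this set:

```python
import numpy as np

X = np.array([[1.0, 1.0],
              [1.0, 4.0]])    # columns are the vectors being tested

# Determinant test: nonzero -> independent.
print(np.linalg.det(X))       # ≈ 3.0

# Eigenvalue test: no zero eigenvalue -> independent.
print(np.linalg.eigvals(X))

# Null-space test: full column rank means Xc = 0 only for c = 0.
print(np.linalg.matrix_rank(X))   # 2
```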
Determinant Test

Definition 20.3. The determinant of the 2 × 2 matrix A is given by

    det(A) = det [ α11  α12 ]  =  α11 α22 − α21 α12
                 [ α21  α22 ]

Example 1
    det [ 1   1 ]  =  −3 − 2 = −5
        [ 2  −3 ]

    det [ 1  −2 ]  =  −4 + 4 = 0
        [ 2  −4 ]

Note it is also easy to see whether one column is or is not a scalar multiple of the other.

Eigenvalues and Eigenvectors
Definition 20.4. If A is an n × n matrix then the nonzero vector x and scalar λ are an eigenvalue/eigenvector pair if

    Ax = xλ

We have

    Ax = xλ → (A − λI)x = 0

    ∴ det(A − λI) = 0

Eigenvalues and Eigenvectors
• det(A − λI) is a polynomial of degree n in the variable λ.
• The eigenvalues are the roots of the polynomial.
• Any nonzero vector that solves the system of equations

    (A − λI)x = 0

is an eigenvector associated with λ.
• If x is an eigenvector associated with eigenvalue λ then so is αx for any nonzero scalar α.
• If x1 and x2 ≠ αx1 are eigenvectors associated with eigenvalue λ then so is any nonzero combination α1x1 + α2x2.

Example
Example 4, Textbook page 379.

    A = [ 3  −1 ]
        [ 4  −2 ]

    det [ 3−λ   −1  ]  =  −(3 − λ)(2 + λ) + 4
        [  4   −2−λ ]
                       =  λ2 − λ − 2 = (λ − 2)(λ + 1) = 0

    ∴ λ1 = 2  and  λ2 = −1
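The same eigenvalues can be confirmed with numpy:

```python
import numpy as np

A = np.array([[3.0, -1.0],
              [4.0, -2.0]])

vals, vecs = np.linalg.eig(A)
print(sorted(vals.real))      # [-1.0, 2.0]

# Each column of `vecs` is a (normalized) eigenvector: A v = lambda v.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```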
Example

Example 4, Textbook page 379. For λ1 = 2:

    A − 2I = [ 1  −1 ]
             [ 4  −4 ]

Add −4 times row 1 to row 2:

    [ 1  −1 | 0 ]  →  [ 1  −1 | 0 ]  →  x1 = x2
    [ 4  −4 | 0 ]     [ 0   0 | 0 ]

So any vector with x1 = x2, with x2 an arbitrary nonzero value, is an eigenvector for λ1 = 2.

Example
Example 4, Textbook page 379. For λ2 = −1:

    A + I = [ 4  −1 ]
            [ 4  −1 ]

Add −1 times row 1 to row 2:

    [ 4  −1 | 0 ]  →  [ 4  −1 | 0 ]  →  4x1 = x2
    [ 4  −1 | 0 ]     [ 0   0 | 0 ]

So any vector with x1 = 0.25x2, with x2 an arbitrary nonzero value, is an eigenvector for λ2 = −1.

Independence
Note that

    λ1 = 2, x(1) = [ 1 ]      and      λ2 = −1, x(2) = [ 1 ]
                   [ 1 ]                               [ 4 ]

and x(1) and x(2) are linearly independent. This is consistent with Theorem 20.5.

Initial Value Problem for the Example

Suppose we impose the initial conditions x1(0) = 1 and x2(0) = 1.
    x(t) = x(1)(t)c1 + x(2)(t)c2 = [ e^{2t}    e^{−t} ] [ c1 ]
                                   [ e^{2t}   4e^{−t} ] [ c2 ]

At t = 0:

    [ x1(0) ]   [ 1 ]   [ 1  1 ] [ c1 ]
    [ x2(0) ] = [ 1 ] = [ 1  4 ] [ c2 ]

    c1 = 1,  c2 = 0

so

    x(t) = x(1)(t) = [ e^{2t} ]
                     [ e^{2t} ]

solves the IVP.
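The coefficients c1, c2 are just the solution of the 2 × 2 system X(0)c = x(0), which can be checked numerically:

```python
import numpy as np

# Columns of X(0) are the eigenvectors [1, 1] and [1, 4];
# x(0) holds the initial conditions x1(0) = x2(0) = 1.
X0 = np.array([[1.0, 1.0],
               [1.0, 4.0]])
x0 = np.array([1.0, 1.0])

c = np.linalg.solve(X0, x0)
print(c)   # [1. 0.]  ->  x(t) = x(1)(t) = [e^{2t}, e^{2t}]
```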
Problems with n ≥ 3

• For this class, exams will only cover ODE systems with n = 2.
• The textbook has discussions and examples of determining eigenvalues/eigenvectors and solving linear systems with n = 3 and n = 4. Read Sections 7.2 and 7.3.
• The rest of this set gives some examples.
• The homework will include other problems with n = 3.

Determinant Test
Definition 20.5. The determinant, det(A), of a 3 × 3 matrix A is given by

    det(A) = det [ α11  α12  α13 ]
                 [ α21  α22  α23 ]
                 [ α31  α32  α33 ]

           = (−1)^{i+1} det(Ai1)αi1 + (−1)^{i+2} det(Ai2)αi2 + (−1)^{i+3} det(Ai3)αi3   (along row i)
           = (−1)^{1+j} det(A1j)α1j + (−1)^{2+j} det(A2j)α2j + (−1)^{3+j} det(A3j)α3j   (along column j)

where Aij is the 2 × 2 matrix resulting from removing row i and column j from A.
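The cofactor expansion generalizes to any n and can be written as a short recursive function (a teaching sketch; in practice np.linalg.det is the right tool, and the 3 × 3 matrix below is an illustrative choice):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along row 1 (a teaching
    sketch; np.linalg.det is the practical choice)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # A_{1j}
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = [[1, -2, 3], [-1, 2, -2], [2, -1, -1]]      # illustrative 3x3 matrix
print(det_cofactor(A))                          # -3.0
print(np.linalg.det(np.array(A, dtype=float)))  # same value
```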
Example

Use row 1:

    det(A) = det [  1  −2   3 ]
                 [ −1   2  −2 ]
                 [  2  −1  −1 ]

           = (−1)^{1+1} (1) det [  2  −2 ]  + (−1)^{1+2} (−2) det [ −1  −2 ]  + (−1)^{1+3} (3) det [ −1   2 ]
                                [ −1  −1 ]                        [  2  −1 ]                       [  2  −1 ]

           = −4 + 10 − 9 = −3 ≠ 0

Example

Use column 1:

    det(A) = (−1)^{1+1} (1) det [  2  −2 ]  + (−1)^{2+1} (−1) det [ −2   3 ]  + (−1)^{3+1} (2) det [ −2   3 ]
                                [ −1  −1 ]                        [ −1  −1 ]                       [  2  −2 ]

           = −4 + 5 − 4 = −3 ≠ 0

Example

Use row 1:

    det(A) = det [  1  −2   3 ]
                 [ −1   1  −2 ]
                 [  2  −1   3 ]

           = (−1)^{1+1} (1) det [  1  −2 ]  + (−1)^{1+2} (−2) det [ −1  −2 ]  + (−1)^{1+3} (3) det [ −1   1 ]
                                [ −1   3 ]                        [  2   3 ]                       [  2  −1 ]

           = (3 − 2) + 2(−3 + 4) + 3(1 − 2) = 1 + 2 − 3 = 0
2 3 + (−1) = (3 − 2) + 2(−3 + 4) + 3(1 − 2)
=1+2−3=0 42 1+3 (3) −1 1 2 −1 ' $ Example
Find the eigenvalues. 3 2 2 1
A=
4
1 −2 −4 −1 & 3−λ det 1 −2 43 2 2 4−λ 1 −4 −1 − λ % ' $ Example det(A − λI ) = (−1) +(−1) 1+2 (2) 1 1+1 (3 − λ) 1 −2 −1 − λ + (−1) 4−λ 1 −4 −1 − λ 1+3 (2) 1 4−λ −2 −4 = (3 − λ)(λ2 − 3λ) + 2(λ − 1) + 2(−2λ + 4)
= −λ3 + 6λ2 − 11λ + 6
λ1 = 1, λ2 = 2, λ3 = 3
Example

Let n = 3 and consider

    x(1) = [  1 ]      x(2) = [ −2 ]      x(3) = [  3 ]
           [ −1 ]             [  1 ]             [ −2 ]
           [  2 ]             [ −1 ]             [ −1 ]

Solve the system

    [  1  −2   3 ] [ c1 ]   [ 0 ]
    [ −1   1  −2 ] [ c2 ] = [ 0 ]
    [  2  −1  −1 ] [ c3 ]   [ 0 ]
Example

Use row combinations to transform the system of equations (this also works for nonzero right-hand side vectors).

Add row 1 to row 2; add −2 times row 1 to row 3:

    [  1  −2   3 | 0 ]      [ 1  −2   3 | 0 ]
    [ −1   1  −2 | 0 ]  →   [ 0  −1   1 | 0 ]
    [  2  −1  −1 | 0 ]      [ 0   3  −7 | 0 ]

Multiply row 2 by −1:

    [ 1  −2   3 | 0 ]      [ 1  −2   3 | 0 ]
    [ 0  −1   1 | 0 ]  →   [ 0   1  −1 | 0 ]
    [ 0   3  −7 | 0 ]      [ 0   3  −7 | 0 ]

Example

Add −3 times row 2 to row 3:

    [ 1  −2   3 | 0 ]      [ 1  −2   3 | 0 ]
    [ 0   1  −1 | 0 ]  →   [ 0   1  −1 | 0 ]
    [ 0   3  −7 | 0 ]      [ 0   0  −4 | 0 ]

Divide row 3 by −4:

    [ 1  −2   3 | 0 ]      [ 1  −2   3 | 0 ]
    [ 0   1  −1 | 0 ]  →   [ 0   1  −1 | 0 ]
    [ 0   0  −4 | 0 ]      [ 0   0   1 | 0 ]
Example

    [ 1  −2   3 ] [ c1 ]   [ 0 ]
    [ 0   1  −1 ] [ c2 ] = [ 0 ]
    [ 0   0   1 ] [ c3 ]   [ 0 ]

    c1 − 2c2 + 3c3 = 0
    c2 − c3 = 0
    c3 = 0

c1 = c2 = c3 = 0 is the unique solution, so the vectors are independent.
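The same conclusion follows instantly from the numerical tests listed earlier:

```python
import numpy as np

# Columns are x(1), x(2), x(3) from the independent n = 3 example.
X = np.array([[ 1.0, -2.0,  3.0],
              [-1.0,  1.0, -2.0],
              [ 2.0, -1.0, -1.0]])

print(np.linalg.matrix_rank(X))   # 3: full rank -> only c = 0 solves Xc = 0
print(np.linalg.det(X) != 0.0)    # True
```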
Example

Let n = 3 and consider

    x(1) = [  1 ]      x(2) = [ −2 ]      x(3) = [  3 ]
           [ −1 ]             [  1 ]             [ −2 ]
           [  2 ]             [ −1 ]             [  3 ]

Solve the system

    [  1  −2   3 ] [ c1 ]   [ 0 ]
    [ −1   1  −2 ] [ c2 ] = [ 0 ]
    [  2  −1   3 ] [ c3 ]   [ 0 ]
Example

Because only one entry of the matrix changed, the same row combinations transform the system of equations.

Add row 1 to row 2; add −2 times row 1 to row 3:

    [  1  −2   3 | 0 ]      [ 1  −2   3 | 0 ]
    [ −1   1  −2 | 0 ]  →   [ 0  −1   1 | 0 ]
    [  2  −1   3 | 0 ]      [ 0   3  −3 | 0 ]

Multiply row 2 by −1:

    [ 1  −2   3 | 0 ]      [ 1  −2   3 | 0 ]
    [ 0  −1   1 | 0 ]  →   [ 0   1  −1 | 0 ]
    [ 0   3  −3 | 0 ]      [ 0   3  −3 | 0 ]

Example

Add −3 times row 2 to row 3:

    [ 1  −2   3 | 0 ]      [ 1  −2   3 | 0 ]
    [ 0   1  −1 | 0 ]  →   [ 0   1  −1 | 0 ]
    [ 0   3  −3 | 0 ]      [ 0   0   0 | 0 ]
Example

    [ 1  −2   3 ] [ c1 ]   [ 0 ]
    [ 0   1  −1 ] [ c2 ] = [ 0 ]
    [ 0   0   0 ] [ c3 ]   [ 0 ]

    c1 − 2c2 + 3c3 = 0
    c2 − c3 = 0
    0 = 0

c3 is arbitrary and then c1 and c2 follow. So c1 = −1, c2 = 1, and c3 = 1 is a nonzero solution. The vectors are dependent.
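The dependency can be confirmed by substituting c = (−1, 1, 1) back into Xc = 0:

```python
import numpy as np

# Columns are x(1), x(2), x(3) from the dependent n = 3 example.
X = np.array([[ 1.0, -2.0,  3.0],
              [-1.0,  1.0, -2.0],
              [ 2.0, -1.0,  3.0]])
c = np.array([-1.0, 1.0, 1.0])

print(X @ c)                      # [0. 0. 0.]
print(np.linalg.matrix_rank(X))   # 2: rank deficient -> dependent
```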
Example

    x(1) = [  1 ]      x(2) = [ −2 ]      x(3) = [  3 ]
           [ −1 ]             [  2 ]             [ −2 ]
           [  2 ]             [ −1 ]             [ −1 ]

Solve the system

    [  1  −2   3 ] [ c1 ]   [ 0 ]
    [ −1   2  −2 ] [ c2 ] = [ 0 ]
    [  2  −1  −1 ] [ c3 ]   [ 0 ]
Example

Add row 1 to row 2; add −2 times row 1 to row 3:

    [  1  −2   3 | 0 ]      [ 1  −2   3 | 0 ]
    [ −1   2  −2 | 0 ]  →   [ 0   0   1 | 0 ]
    [  2  −1  −1 | 0 ]      [ 0   3  −7 | 0 ]

Swap rows 2 and 3:

    [ 1  −2   3 | 0 ]      [ 1  −2   3 | 0 ]
    [ 0   0   1 | 0 ]  →   [ 0   3  −7 | 0 ]
    [ 0   3  −7 | 0 ]      [ 0   0   1 | 0 ]
Example

Divide row 2 by 3:

    [ 1  −2   3 | 0 ]      [ 1  −2    3   | 0 ]
    [ 0   3  −7 | 0 ]  →   [ 0   1  −7/3  | 0 ]
    [ 0   0   1 | 0 ]      [ 0   0    1   | 0 ]

    c1 − 2c2 + 3c3 = 0
    c2 − (7/3)c3 = 0
    c3 = 0

So c1 = c2 = c3 = 0 is the unique solution and the vectors are independent.
This note was uploaded on 07/21/2011 for the course MAP 2203 taught by Professor Gallian during the Fall '09 term at FSU.