Lecture 18: Linear Systems and the Fundamental Matrix

As was the case for linear second order DEs, homogeneous linear systems will play an important role in the theory of linear systems. As such, we return to the homogeneous linear system
\[
\mathbf{x}' = A(t)\,\mathbf{x}, \tag{1}
\]
where $A(t)$ is an $n \times n$ matrix of coefficients $a_{ij}(t)$ which are assumed to be continuous functions of $t$ over an interval $I$. This includes the case that some or all of the $a_{ij}$ are constant, i.e., time independent. A solution $\mathbf{x}(t)$ is an $n$-tuple of $C^1$ functions $x_i : I \to \mathbb{R}$. We shall represent such a solution as a column vector:
\[
\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}. \tag{2}
\]
The solution $\mathbf{x}$ may be considered as a $C^1$ vector-valued function $\mathbf{x} : I \to \mathbb{R}^n$. We shall denote the space of such functions as $C^1(I, \mathbb{R}^n)$. The solution space of (1), which we shall denote as $S$, is a subspace of $C^1(I, \mathbb{R}^n)$. As mentioned in the Course Notes, it is the kernel of the linear operator $L : C^1(I, \mathbb{R}^n) \to C^1(I, \mathbb{R}^n)$ defined as follows:
\[
L\mathbf{x} = \mathbf{x}' - A(t)\,\mathbf{x}, \qquad \mathbf{x} \in C^1(I, \mathbb{R}^n). \tag{3}
\]
Earlier in this course, we studied linear second order DEs having the standard form
\[
y'' + P(x)\,y' + Q(x)\,y = 0. \tag{4}
\]
If we change the name of the independent variable from $x$ to $t$ and define
\[
x_1(t) = y(t), \qquad x_2(t) = y'(t), \tag{5}
\]
then the DE in (4) can be expressed as a linear homogeneous system of the form (1), with $n = 2$ and the matrix $A(t)$ given by
\[
A(t) = \begin{pmatrix} 0 & 1 \\ -Q(t) & -P(t) \end{pmatrix}. \tag{6}
\]
As we'll see below, the results we derived for linear second order homogeneous DEs will carry over to the general $n$-dimensional case. In an analogous way, a general $n$th order linear homogeneous DE in $y(x)$, written in standard form as
\[
y^{(n)} + a_{n-1}(x)\,y^{(n-1)} + \cdots + a_1(x)\,y' + a_0(x)\,y = 0, \tag{7}
\]
may be expressed as a linear system of $n$ first order DEs, with the matrix $A(t)$ in (1) appropriately defined. We now show that the solution space $S$ of Eq. (1) is a linear vector space.
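The reduction (4)–(6) can be checked numerically. The sketch below (an illustration, not part of the lecture) takes the hypothetical constant-coefficient case $P(t) = 0$, $Q(t) = 1$, i.e. $y'' + y = 0$, builds the companion matrix $A(t)$ of Eq. (6), and integrates the system $\mathbf{x}' = A(t)\mathbf{x}$; with $y(0) = 1$, $y'(0) = 0$ the first component should track the known solution $y = \cos t$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # Companion matrix of Eq. (6) for the illustrative case P = 0, Q = 1.
    P, Q = 0.0, 1.0
    return np.array([[0.0, 1.0],
                     [-Q,  -P]])

def rhs(t, x):
    # Right-hand side of the first-order system x' = A(t) x.
    return A(t) @ x

# Initial condition (y(0), y'(0)) = (1, 0); exact solution y(t) = cos(t).
sol = solve_ivp(rhs, (0.0, np.pi), [1.0, 0.0],
                rtol=1e-10, atol=1e-12, t_eval=[np.pi])

# First component x1 = y; at t = pi it should be close to cos(pi) = -1.
print(sol.y[0, -1])
```

The same two-line pattern (`A(t)` plus `rhs`) handles any time-dependent $P(t)$, $Q(t)$: only the body of `A` changes.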
Firstly, $\mathbf{x} = \mathbf{0}$, the trivial solution of (1), is an element of this space. Secondly, the space is closed under addition and scalar multiplication by the following Principle of Superposition: If $\mathbf{x}_1(t)$ and $\mathbf{x}_2(t)$ are solutions of (1), i.e. elements of $S$, then so is the following linear combination,
\[
\mathbf{x}(t) = c_1 \mathbf{x}_1(t) + c_2 \mathbf{x}_2(t), \qquad \text{for any } c_1, c_2 \in \mathbb{R}. \tag{8}
\]
The proof is easy:
\[
\mathbf{x}'(t) = \bigl(c_1 \mathbf{x}_1(t) + c_2 \mathbf{x}_2(t)\bigr)'
= c_1 \mathbf{x}_1'(t) + c_2 \mathbf{x}_2'(t)
= c_1 A\mathbf{x}_1(t) + c_2 A\mathbf{x}_2(t)
= A\bigl(c_1 \mathbf{x}_1(t) + c_2 \mathbf{x}_2(t)\bigr)
= A\mathbf{x}(t).
\]
Note that the superposition property is made possible by the linearity of both the differential operator and of matrix multiplication.
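The chain of equalities in the proof can be verified term by term on a concrete system. The sketch below (my own illustration, with a constant $A$ chosen for convenience; the principle holds equally for time-dependent $A(t)$) takes two known solutions of $\mathbf{x}' = A\mathbf{x}$ and checks that the derivative of a linear combination equals $A$ applied to that combination, exactly as in the proof.

```python
import numpy as np

# Constant coefficient matrix (the system x1' = x2, x2' = -x1).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Two known solutions of x' = A x:
#   x1(t) = (cos t, -sin t),  x2(t) = (sin t, cos t).
def x1(t): return np.array([np.cos(t), -np.sin(t)])
def x2(t): return np.array([np.sin(t),  np.cos(t)])

# An arbitrary linear combination, Eq. (8), at an arbitrary time.
c1, c2, t = 2.0, -3.0, 0.7
x = c1 * x1(t) + c2 * x2(t)

# Its derivative, computed term by term: c1*x1'(t) + c2*x2'(t).
dx = (c1 * np.array([-np.sin(t), -np.cos(t)])
      + c2 * np.array([np.cos(t), -np.sin(t)]))

# Superposition says dx must equal A @ x.
print(np.allclose(dx, A @ x))   # True
```

Each intermediate equality in the proof corresponds to one line here: differentiating the combination, using that each $\mathbf{x}_i' = A\mathbf{x}_i$, and factoring $A$ out by linearity of matrix multiplication.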
This note was uploaded on 09/20/2010 for the course AMATH 351 taught by Professor Sivabal Sivaloganathan during the Spring '08 term at Waterloo.