Lecture 2A: Linear Systems

In these chapter 2A notes we write vectors in boldface to reduce the ambiguity of the notation.

2.1 Matrix ODEs

Let x ∈ R^n and let α ∈ R be a scalar. A linear function f: R^n → R^n satisfies

    Linear superposition: f(x + y) = f(x) + f(y)
    Linear scaling: f(αx) = αf(x)

Example: f(x) = ax + b with b ≠ 0 is not a linear function; instead it is affine.

General form of a linear function: f_i(x) = a_{i1}x_1 + ... + a_{in}x_n for i = 1, ..., n. In matrix notation: f(x) = Ax. Assume A is a real n × n matrix and x ∈ R^n.

General form for a linear ODE:

    ẋ = Ax.    [1]

Example (linearization about an equilibrium point): Let f: R → R. Recall the Taylor series expansion about x*:

    f(x) = f(x*) + f'(x*)(x − x*) + O((x − x*)^2).

The linear approximation at x* corresponds to throwing away the higher-order terms. Now suppose f: R^n → R^n. The same expansion applies if the derivative is interpreted as an n × n matrix, the Jacobian Df(x*). Now suppose the ODE ẋ = f(x) has an equilibrium point x*. Expand the right-hand side of the ODE in a Taylor series about x* and use f(x*) = 0 to obtain

    ẋ = Df(x*)(x − x*) + O(‖x − x*‖^2).

Finally, substitute y = x − x*, let A = Df(x*), and make a linear approximation by throwing away the higher-order terms to get ẏ = Ay. This process of linearization about an equilibrium point is an important technique in the analysis of ODEs.
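
To make the linearization step concrete, here is a minimal numerical sketch. It is not from the notes: the damped-pendulum vector field f, its equilibrium at the origin, and the finite-difference step h are all assumptions chosen for illustration.

    import numpy as np

    def f(x):
        # Hypothetical nonlinear system: a damped pendulum with
        # x = (angle, angular velocity); equilibrium at x* = (0, 0).
        return np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])

    def jacobian_fd(f, x_star, h=1e-6):
        # Finite-difference approximation of the Jacobian Df(x*):
        # perturb each coordinate of x* and difference the vector field.
        n = x_star.size
        J = np.zeros((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (f(x_star + e) - f(x_star - e)) / (2 * h)
        return J

    x_star = np.array([0.0, 0.0])   # equilibrium: f(x_star) = 0
    A = jacobian_fd(f, x_star)      # A = Df(x*); the linearization is ydot = Ay
    print(A)                        # approximately [[0, 1], [-1, -0.5]]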
2.2 Eigenvalues and Eigenvectors

Let A be a real n × n matrix and λ ∈ C. An eigenvector v of A is a nonzero solution of Av = λv; λ is the corresponding eigenvalue. From linear algebra we know that the eigenvalues of a matrix A are given by the roots of the characteristic polynomial p(λ) = det(A − λI).

Look for solutions of [1] of the form x(t) = s(t)v. Substitution into [1] leads to ṡv = sAv = λsv. Since v is nonzero, this implies ṡ = λs, with solution s(t) = ce^{λt} for an arbitrary constant c. The corresponding solution of [1] is x(t) = ce^{λt}v.

Example: Consider a 2 × 2 matrix A. Its eigenvalues λ1 and λ2 are the solutions of det(A − λI) = 0. The first eigenvector v1 satisfies Av1 = λ1v1, or (A − λ1I)v1 = 0. Letting v1 = (a, b)^T gives two linear equations for a and b; because det(A − λ1I) = 0, these equations are dependent, and either one determines the ratio a : b. Eigenvectors are determined only up to an arbitrary multiplicative factor. Similarly, find v2 from (A − λ2I)v2 = 0. Two solutions of [1] for this matrix A are x1(t) = e^{λ1 t}v1 and x2(t) = e^{λ2 t}v2. Sketch examples of x1(t) and x2(t) in the (x1, x2) plane. Meiss calls these "straight line" solutions.
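
A quick numerical check of the eigenvector computation and the straight-line solutions. The matrix A below is an illustrative stand-in (an assumption, since the notes' example matrix is not reproduced here); numpy.linalg.eig returns the eigenvalues together with the eigenvectors as columns.

    import numpy as np

    # Illustrative 2 x 2 matrix (an assumption for this sketch).
    A = np.array([[1.0, 1.0],
                  [4.0, 1.0]])

    lams, V = np.linalg.eig(A)   # columns of V are eigenvectors
    print(lams)                  # eigenvalues: 3 and -1

    # Check the straight-line solution x(t) = e^{lam t} v of [1]:
    # since Av = lam v, the derivative lam x(t) equals A x(t).
    t = 0.7
    for lam, v in zip(lams, V.T):
        x = np.exp(lam * t) * v
        assert np.allclose(lam * x, A @ x)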
2.3 Diagonalization

Since [1] is linear, linear combinations of solutions are themselves solutions. Ex: x(t) = c1 e^{λ1 t}v1 + c2 e^{λ2 t}v2.

If A has n linearly independent eigenvectors v1, ..., vn, then these span R^n. In this case we also say that A has a complete set of eigenvectors. The matrix P = [v1 | v2 | ... | vn] is nonsingular. [Vertical bars separate column vectors.] Then AP = PΛ, where Λ = diag(λ1, ..., λn) is the matrix with the eigenvalues on the main diagonal and zeros in all of the other slots. Multiply by P^{-1} on the left to obtain P^{-1}AP = Λ. In this case we say A is diagonalizable or semisimple.

Example (continued): The inverse of any 2 × 2 matrix [[a, b], [c, d]] is (1/Δ)[[d, −b], [−c, a]], where Δ = ad − bc. Use the eigenvector matrix P from the example above to obtain P^{-1}, and then verify that P^{-1}AP = Λ = diag(λ1, λ2).

If A has n linearly independent eigenvectors, then any x ∈ R^n can be written as a linear combination of these: x = y1 v1 + ... + yn vn = Py, where y = (y1, ..., yn)^T can be regarded as the coordinates of the point x in eigenvector coordinates. The matrix P transforms from eigenvector coordinates to standard coordinates; correspondingly, the matrix P^{-1} transforms back from standard coordinates to eigenvector coordinates.
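
A short numerical sketch of the diagonalization and the change of coordinates, continuing with the same illustrative matrix as above (still an assumption, not the notes' example):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [4.0, 1.0]])    # illustrative matrix from the sketch above

    lams, P = np.linalg.eig(A)    # P = [v1 | v2], eigenvectors as columns
    Lam = np.diag(lams)           # Lambda: eigenvalues on the main diagonal

    # Verify the diagonalization P^{-1} A P = Lambda.
    P_inv = np.linalg.inv(P)
    assert np.allclose(P_inv @ A @ P, Lam)

    # Change of coordinates: y = P^{-1} x gives the eigenvector
    # coordinates of x, and x = P y transforms back.
    x = np.array([2.0, -1.0])
    y = P_inv @ x
    assert np.allclose(P @ y, x)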
With this in mind, consider [1], substitute x = Py, and multiply on the left by P^{-1} to get ẏ = P^{-1}APy, that is,

    ẏ = Λy,    [2]

where y = P^{-1}x are the coordinates of the point x in the basis of eigenvectors. Since Λ is a diagonal matrix, the equations [2] are not coupled. The i-th component is just ẏ_i = λ_i y_i, with solution y_i(t) = c_i e^{λ_i t} for some constant c_i. In matrix notation this solution is written y(t) = e^{Λt}c, where e^{Λt} = diag(e^{λ_1 t}, ..., e^{λ_n t}).
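
Putting [2] to work: the sketch below solves [1] by transforming the initial condition to eigenvector coordinates, applying e^{Λt} componentwise, and transforming back. The matrix A and the initial condition x0 are assumptions carried over from the sketches above; the result is cross-checked against scipy's general matrix exponential.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 1.0],
                  [4.0, 1.0]])        # illustrative matrix again
    x0 = np.array([2.0, -1.0])        # hypothetical initial condition for [1]

    lams, P = np.linalg.eig(A)
    c = np.linalg.solve(P, x0)        # c = y(0) = P^{-1} x0

    t = 0.5
    eLt = np.diag(np.exp(lams * t))   # e^{Lambda t} = diag(e^{lam_i t})
    x_t = P @ eLt @ c                 # x(t) = P e^{Lambda t} P^{-1} x0

    # Cross-check against the full matrix exponential e^{At} x0.
    assert np.allclose(x_t, expm(A * t) @ x0)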