Lecture 18: Iterative Solution of Linear Systems
  * Newton refinement
  * Conjugate gradient method

Review of Part II: Methods and Formulas

Basic Matrix Theory:
  Identity matrix: AI = A, IA = A, and Iv = v.
  Inverse matrix: A A^{-1} = I and A^{-1} A = I.
  Norm of a matrix: \|A\| \equiv \max_{\|v\|=1} \|Av\|.
  A matrix may be singular or nonsingular. See Lecture 10.

Solving Process:
  Gaussian elimination produces the LU decomposition.
  Row pivoting.
  Back substitution.

Condition number:
  cond(A) \equiv \max \frac{\|\delta x\| / \|x\|}{\|\delta A\| / \|A\| + \|\delta b\| / \|b\|}
          = \max \left( \frac{\text{relative error of output}}{\text{relative error of inputs}} \right).
  A big condition number is bad; in engineering it usually results from poor design.

LU factorization: PA = LU. Solving steps:
  Multiply by P:  d = Pb
  Forward-solve:  Ly = d
  Back-solve:     Ux = y

Eigenvalues and eigenvectors:
  A nonzero vector v is an eigenvector (ev) and a number \lambda is its eigenvalue (ew) if Av = \lambda v.
  Characteristic equation: \det(A - \lambda I) = 0.
  Equation of the eigenvector: (A - \lambda I)v = 0.
  Complex ews occur in conjugate pairs, \lambda_{1,2} = \alpha \pm i\beta, and the evs must also come in conjugate pairs: w = u \pm iv.
  Vibrational modes: eigenvalues are frequencies squared; eigenvectors are modes.

Power Method:
  Repeatedly multiply x by A and divide by the element with the largest absolute value. ...
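The PA = LU solving steps listed above (factor with row pivoting, then d = Pb, forward-solve Ly = d, back-solve Ux = y) can be sketched in plain Python. This is an illustrative sketch, not the course's own code; the function name `lu_solve` and the example system are assumptions for demonstration.

```python
def lu_solve(A, b):
    """Sketch: solve A x = b via PA = LU with partial (row) pivoting."""
    n = len(A)
    U = [row[:] for row in A]           # U starts as a copy of A
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))               # row permutation, representing P
    for k in range(n):
        # Row pivoting: bring the largest-magnitude entry in column k up to row k
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        U[k], U[p] = U[p], U[k]
        L[k], L[p] = L[p], L[k]
        perm[k], perm[p] = perm[p], perm[k]
        L[k][k] = 1.0
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]       # elimination multiplier
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    # Multiply by P:  d = P b
    d = [b[i] for i in perm]
    # Forward-solve:  L y = d
    y = [0.0] * n
    for i in range(n):
        y[i] = d[i] - sum(L[i][j] * y[j] for j in range(i))
    # Back-solve:  U x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Example: 2x + y = 3, x + 3y = 5 has solution x = 0.8, y = 1.4
x = lu_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

The payoff of the factorization is that L, U, and P can be reused: solving for a new right-hand side b costs only the cheap triangular solves, not a fresh elimination.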
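The power method described above can likewise be sketched in a few lines: repeatedly multiply x by A and divide by the entry with the largest absolute value; that divisor converges to the dominant eigenvalue. The function name `power_method`, the step count, and the test matrix are illustrative assumptions, not from the course.

```python
def power_method(A, x, steps=50):
    """Sketch: estimate the dominant eigenvalue and eigenvector of A."""
    mu = 0.0
    for _ in range(steps):
        # x <- A x
        x = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
        # Divide by the element with the largest absolute value
        mu = max(x, key=abs)
        x = [xi / mu for xi in x]
    return mu, x

# A = [[2, 1], [1, 2]] has eigenvalues 3 and 1; the iteration should
# converge to mu = 3 with eigenvector proportional to (1, 1).
mu, v = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

The iteration converges at a rate set by the ratio of the second-largest to the largest eigenvalue magnitude (here 1/3 per step), so eigenvalues that are close in magnitude make the method slow.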
This note was uploaded on 02/09/2012 for the course MATH 344, taught by Professor T. Young during the Fall '08 term at Ohio University, Athens.