
Lecture 18: Iterative Solution of Linear Systems

Topics:
* Newton refinement
* Conjugate gradient method
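As a preview of the conjugate gradient topic, here is a minimal sketch of the classic CG iteration for a symmetric positive-definite system Ax = b. The matrix and right-hand side are a made-up 2x2 example, not data from the lecture:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along p
        x = x + alpha * p
        r = r - alpha * Ap          # updated residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # converged when residual is tiny
            break
        p = r + (rs_new / rs_old) * p   # new A-conjugate direction
        rs_old = rs_new
    return x

# Small SPD test system (hypothetical example)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG converges in at most n steps for an n-by-n SPD matrix; in practice it is used as an iterative method and stopped once the residual norm falls below a tolerance.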
Review of Part II: Methods and Formulas

Basic matrix theory:
* Identity matrix: AI = A, IA = A, and Iv = v.
* Inverse matrix: A A^(-1) = I and A^(-1) A = I.
* Norm of a matrix: |A| = max over |v| = 1 of |Av|.
* A matrix may be singular or nonsingular. See Lecture 10.

Solving process:
* Gaussian elimination produces the LU decomposition.
* Row pivoting.
* Back substitution.

Condition number:
cond(A) = max [ (|dx|/|x|) / (|dA|/|A| + |db|/|b|) ] = max ( relative error of output / relative error of inputs ).
A big condition number is bad; in engineering it usually results from poor design.

LU factorization: PA = LU. Solving steps:
1. Multiply by P: d = Pb.
2. Forward-solve: Ly = d.
3. Back-solve: Ux = y.

Eigenvalues and eigenvectors:
A nonzero vector v is an eigenvector (ev) and a number lambda is its eigenvalue (ew) if Av = lambda v.
* Characteristic equation: det(A - lambda I) = 0.
* Equation of the eigenvector: (A - lambda I)v = 0.
* Complex ew's occur in conjugate pairs: lambda_{1,2} = alpha +/- i beta, and the corresponding ev's also occur in conjugate pairs.
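The three PA = LU solving steps above can be sketched in Python. The matrices here are a small hand-built example (not from the lecture), with P swapping the two rows so that PA = LU holds exactly:

```python
import numpy as np

def forward_solve(L, d):
    """Solve L y = d for lower-triangular L by forward substitution."""
    n = len(d)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (d[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_solve(U, y):
    """Solve U x = y for upper-triangular U by back substitution."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Hand-built example: P swaps the rows of A, and PA = LU.
P = np.array([[0.0, 1.0], [1.0, 0.0]])
A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([5.0, 6.0])
L = np.array([[1.0, 0.0], [1.0 / 3.0, 1.0]])
U = np.array([[3.0, 4.0], [0.0, 2.0 / 3.0]])

d = P @ b                  # step 1: multiply by P
y = forward_solve(L, d)    # step 2: forward-solve L y = d
x = back_solve(U, y)       # step 3: back-solve U x = y
```

Once the factorization is computed, new right-hand sides cost only the cheap triangular solves, which is why LU is reused instead of re-running elimination.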
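The eigenvalue definitions above can be checked numerically. This hypothetical 2x2 matrix (my own example) has a complex conjugate pair of eigenvalues alpha +/- i beta, as the review states:

```python
import numpy as np

# Example matrix with complex conjugate eigenvalues 1 +/- 2i
A = np.array([[1.0, -2.0], [2.0, 1.0]])
lam, V = np.linalg.eig(A)   # eigenvalues in lam; eigenvectors are columns of V

# The defining relation A v = lambda v, for all pairs at once:
residual = A @ V - V @ np.diag(lam)
```

Equivalently, each lam[i] is a root of the characteristic equation det(A - lambda I) = 0, which for this matrix is (1 - lambda)^2 + 4 = 0.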
