Lecture 18
Iterative solution of linear systems*
Newton refinement
Conjugate gradient method
Review of Part II
Methods and Formulas
Basic Matrix Theory:
Identity matrix: AI = A, IA = A, and Iv = v
Inverse matrix: AA⁻¹ = I and A⁻¹A = I
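The inverse identities above are easy to check numerically. A minimal sketch using NumPy (the 2×2 matrix is chosen arbitrarily; any nonsingular matrix works):

```python
import numpy as np

# An arbitrary nonsingular 2x2 matrix
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
A_inv = np.linalg.inv(A)

# Both products equal the identity, up to floating-point roundoff
I = np.eye(2)
print(np.allclose(A @ A_inv, I))  # True
print(np.allclose(A_inv @ A, I))  # True
```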
Norm of a matrix: ‖A‖ ≡ max over ‖v‖ = 1 of ‖Av‖
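The definition can be probed directly: for the 2-norm, NumPy computes ‖A‖ as the largest singular value, and maximizing ‖Av‖ over random unit vectors approaches the same value from below. A sketch (matrix and sample count are arbitrary choices):

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# The induced 2-norm: largest singular value of A
op_norm = np.linalg.norm(A, 2)

# Brute-force check: maximize ||A v|| over many random unit vectors v
rng = np.random.default_rng(0)
best = 0.0
for _ in range(20000):
    v = rng.standard_normal(2)
    v /= np.linalg.norm(v)              # normalize so ||v|| = 1
    best = max(best, np.linalg.norm(A @ v))

print(op_norm, best)  # best approaches op_norm from below
```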
A matrix may be singular or nonsingular. See Lecture 10.
Solving Process:
Gaussian Elimination produces LU decomposition
Row Pivoting
Back Substitution
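The last step of the process, back substitution, solves an upper-triangular system by working from the bottom row up. A minimal sketch (the example system is made up for illustration):

```python
import numpy as np

def back_substitute(U, y):
    """Solve U x = y for upper-triangular U by back substitution."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract the already-known terms, then divide by the diagonal
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
y = np.array([5.0, 7.0, 8.0])
x = back_substitute(U, y)
print(np.allclose(U @ x, y))  # True
```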
Condition number:
cond(A) ≡ max [ (‖δx‖/‖x‖) / (‖δA‖/‖A‖ + ‖δb‖/‖b‖) ] = max ( Relative error of output / Relative error of inputs ).
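NumPy exposes this quantity directly as numpy.linalg.cond. A sketch contrasting a well-conditioned matrix with a nearly singular one (both matrices are made up for illustration):

```python
import numpy as np

A_good = np.array([[2.0, 0.0],
                   [0.0, 1.0]])
A_bad  = np.array([[1.0, 1.0],
                   [1.0, 1.0001]])   # nearly singular: rows almost parallel

print(np.linalg.cond(A_good))  # 2.0 -- well conditioned
print(np.linalg.cond(A_bad))   # on the order of 4e4 -- ill conditioned
```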
A big condition number is bad: small relative errors in A or b can be amplified into large relative errors in x. In engineering it usually results from poor design.
LU factorization: PA = LU.
Solving steps:
Multiply by P: d = Pb
Forward solve: Ly = d
Back solve: Ux = y
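The three solving steps can be sketched end to end. The small LU routine below (a simplified illustration, not a production factorization) produces P, L, U with PA = LU, and solve_lu then applies the steps in order:

```python
import numpy as np

def lu_pivot(A):
    """LU with row pivoting: returns P, L, U with P @ A = L @ U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        # Pivot: bring the largest entry in column k up to the diagonal
        p = k + np.argmax(np.abs(U[k:, k]))
        U[[k, p]] = U[[p, k]]
        P[[k, p]] = P[[p, k]]
        L[[k, p], :k] = L[[p, k], :k]
        # Eliminate the entries below the pivot
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

def solve_lu(P, L, U, b):
    d = P @ b                  # step 1: multiply by P, d = P b
    y = np.linalg.solve(L, d)  # step 2: forward solve L y = d
    x = np.linalg.solve(U, y)  # step 3: back solve    U x = y
    return x

A = np.array([[0.0, 2.0],
              [1.0, 3.0]])    # pivoting is required: A[0, 0] = 0
b = np.array([2.0, 4.0])
P, L, U = lu_pivot(A)
x = solve_lu(P, L, U, b)
print(np.allclose(A @ x, b))  # True
```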
Eigenvalues and eigenvectors:
A nonzero vector v is an eigenvector (ev) and a number λ is its eigenvalue (ew) if Av = λv.
Characteristic equation: det(A − λI) = 0
Equation of the eigenvector: (A − λI)v = 0
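Both equations can be verified numerically with numpy.linalg.eig, which returns the eigenvalues and the eigenvectors as columns of a matrix. A sketch on an arbitrary symmetric 2×2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# w holds the eigenvalues; the columns of V are the eigenvectors
w, V = np.linalg.eig(A)

# Each pair satisfies A v = lambda v
for lam, v in zip(w, V.T):
    print(np.allclose(A @ v, lam * v))  # True

# Each eigenvalue satisfies the characteristic equation det(A - lambda I) = 0
for lam in w:
    print(abs(np.linalg.det(A - lam * np.eye(2))) < 1e-10)  # True
```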
Complex ew’s:
Occur in conjugate pairs: λ₁,₂ = α ± iβ, and the corresponding ev’s are complex conjugates of each other as well.
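A classic example is a rotation matrix: it is real but has no real eigenvalues, so the eigenvalues come out as a conjugate pair. A sketch using a 90-degree rotation:

```python
import numpy as np

# A 90-degree rotation matrix: real entries, no real eigenvalues
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

w, V = np.linalg.eig(R)
print(np.sort_complex(w))               # the conjugate pair 0 - 1i, 0 + 1i

# Each eigenpair still satisfies R v = lambda v, in complex arithmetic
for lam, v in zip(w, V.T):
    print(np.allclose(R @ v, lam * v))  # True
```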
Fall '08, Young, T.