CS205 – Class 10
Covered in class:
All
Reading:
Shewchuk Paper on course web page
1. Conjugate Gradient Method
– this covers more than just optimization, e.g. we’ll use it later as an iterative
solver to aid in solving PDEs
2. Let’s go back to linear systems of equations Ax=b.
a. Assume that A is square, symmetric, positive definite
b. If A is dense we might use a direct solver, but for a sparse A, iterative solvers are better as they only
deal with nonzero entries
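A minimal sketch (my example, not from the notes) of why sparsity favors iterative methods: if only the nonzero entries are stored, a matrix-vector product — the core operation of an iterative solver — costs O(nonzeros) instead of O(n^2).

```python
# Sketch: a sparse matrix stored as a dict mapping (row, col) -> value,
# so a matrix-vector product touches only the nonzero entries.
def sparse_matvec(entries, x):
    """Return y = A x, where `entries` holds only the nonzeros of A."""
    y = [0.0] * len(x)
    for (i, j), a_ij in entries.items():
        y[i] += a_ij * x[j]
    return y

# A 3x3 symmetric positive definite tridiagonal matrix: 7 nonzeros, not 9 entries
A = {(0, 0): 2.0, (0, 1): -1.0,
     (1, 0): -1.0, (1, 1): 2.0, (1, 2): -1.0,
     (2, 1): -1.0, (2, 2): 2.0}
print(sparse_matvec(A, [1.0, 1.0, 1.0]))  # -> [1.0, 0.0, 1.0]
```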
c. Quadratic Form: f(x) = (1/2) x^T A x − b^T x + c
d. If A is symmetric, positive definite then f(x) is minimized by the solution x to Ax=b!
i. f′(x) = (1/2) A^T x + (1/2) A x − b = Ax − b since A is symmetric
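The gradient formula in (i) can be checked numerically; this is my own sketch, not from the notes. For a symmetric A, central differences on f should match Ax − b at any test point.

```python
# Sketch: verify f'(x) = Ax - b for f(x) = 1/2 x^T A x - b^T x + c
# by comparing against a central-difference gradient.
def f(x, A, b, c):
    n = len(x)
    xAx = sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))
    return 0.5 * xAx - sum(b[i] * x[i] for i in range(n)) + c

def grad_analytic(x, A, b):
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]

def grad_numeric(x, A, b, c, h=1e-6):
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp, A, b, c) - f(xm, A, b, c)) / (2 * h))
    return g

A = [[3.0, 1.0], [1.0, 2.0]]   # symmetric positive definite
b = [1.0, 1.0]
x = [0.7, -0.3]
print(grad_analytic(x, A, b))       # Ax - b
print(grad_numeric(x, A, b, 0.0))   # agrees to ~1e-6
```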
ii. f′(x) = 0 is equivalent to Ax = b
1. this makes sense considering the scalar equivalent f(x) = (1/2) a x^2 − b x + c, where the line of symmetry is x = b/a, which is the solution of ax = b and the location of the maximum or minimum
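A tiny numeric check of the scalar analogy (my example): for a > 0, the parabola f(x) = (1/2) a x^2 − b x + c bottoms out exactly at x = b/a, the solution of ax = b.

```python
# Sketch: the vertex of f(x) = 1/2 a x^2 - b x + c sits at x = b/a.
a, b, c = 2.0, 6.0, 1.0
f = lambda x: 0.5 * a * x * x - b * x + c
x_star = b / a                  # 3.0, which solves a x = b
# f is larger at nearby points than at the vertex
print(f(x_star), f(x_star - 0.5), f(x_star + 0.5))  # -8.0 -7.75 -7.75
```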
iii. The Hessian is H = A, and since A is symmetric positive definite so is H, so a solution to f′(x) = 0 (i.e. to Ax = b) is a minimum of f
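The iteration the notes are building toward can be sketched as follows. This is the standard conjugate gradient algorithm (the subject of the Shewchuk paper in the reading), written here as a small pure-Python dense-matrix sketch rather than a tuned implementation.

```python
# Sketch: standard conjugate gradient for symmetric positive definite A.
def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = list(b)                # residual b - Ax for the initial guess x = 0
    p = list(r)                # first search direction is the residual
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)                          # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]  # conjugate direction
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
b = [1.0, 2.0]
x = cg(A, b)
print(x)                        # Ax ≈ b, so x ≈ [1/11, 7/11]
```

Note that each iteration needs A only through one matrix-vector product, which is why the method pairs naturally with the sparse storage discussed in 2b.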
This document was uploaded on 05/25/2011. Spring '07.