CS205 – Class 10
Covered in class:
All
Reading:
Shewchuk Paper on course web page
1. Conjugate Gradient Method
– this covers more than just optimization; e.g., we'll use it later
as an iterative solver to aid in solving PDEs
2. Let's go back to linear systems of equations Ax=b.
a. Assume that A is square, symmetric, positive definite
b. If A is dense we might use a direct solver, but for a sparse A, iterative solvers are better
as they only deal with the nonzero entries
c. Quadratic Form: f(x) = (1/2) x^T A x − b^T x + c
d. If A is symmetric, positive definite then f(x) is minimized by the solution x to Ax=b!
i. ∇f(x) = (1/2) A^T x + (1/2) A x − b = Ax − b, since A is symmetric
ii. ∇f(x) = 0 is equivalent to Ax=b
1. this makes sense considering the scalar equivalent f(x) = (1/2) a x^2 − b x + c, where
the line of symmetry is x = b/a, which is the solution of ax=b and the
location of the maximum or minimum
iii. The Hessian is H = A, and since A is symmetric, positive definite so is H, so a
solution to ∇f(x) = 0, or Ax=b, is a minimum
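The ideas above can be checked numerically. As a minimal sketch (not part of the original notes, written here in Python with NumPy), the following implements the standard conjugate gradient iteration for a symmetric positive definite A, along with the quadratic form f(x); the CG solution of Ax=b should agree with a direct solve, and f should be smaller at that solution than at nearby points. The small 2×2 system at the bottom is an illustrative choice, not one from the notes.

```python
import numpy as np

def quadratic_form(A, b, c, x):
    # f(x) = (1/2) x^T A x - b^T x + c, minimized at the solution of Ax = b
    return 0.5 * x @ A @ x - b @ x + c

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive definite A by conjugate gradients."""
    n = len(b)
    if max_iter is None:
        max_iter = n  # in exact arithmetic CG converges in at most n steps
    x = np.zeros(n)
    r = b - A @ x          # residual; note r = -grad f(x) = b - Ax
    d = r.copy()           # first search direction is steepest descent
    rs_old = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs_old) < tol:
            break
        Ad = A @ d
        alpha = rs_old / (d @ Ad)       # exact line search along d
        x = x + alpha * d
        r = r - alpha * Ad
        rs_new = r @ r
        d = r + (rs_new / rs_old) * d   # next direction, A-conjugate to the previous ones
        rs_old = rs_new
    return x

# Illustrative SPD system (hypothetical example values)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

For this 2×2 example, CG needs at most two iterations, and x matches `np.linalg.solve(A, b)`; perturbing x in any direction increases f, consistent with the minimum at the solution of Ax=b.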
This note was uploaded on 01/29/2008 for the course CS 205A taught by Professor Fedkiw during the Fall '07 term at Stanford.