CS205 – Class 9
Covered in class: All
Reading: Shewchuk paper on the course web page

1. Conjugate Gradient Method – this covers more than just optimization; e.g., we'll use it later as an iterative solver to aid in solving PDEs.
2. Let's go back to linear systems of equations Ax = b.
   a. Assume that A is square, symmetric, positive definite.
   b. If A is dense we might use a direct solver, but for a sparse A, iterative solvers are better since they only deal with the nonzero entries.
   c. Quadratic form: f(x) = (1/2) x^T A x - b^T x + c
   d. If A is symmetric positive definite, then f(x) is minimized by the solution x of Ax = b!
      i. ∇f(x) = (1/2) A x + (1/2) A^T x - b = Ax - b, since A is symmetric.
      ii. ∇f(x) = 0 is equivalent to Ax = b.
         1. This makes sense considering the scalar equivalent f(x) = (1/2) a x^2 - b x + c, whose line of symmetry is x = b/a, which is both the solution of ax = b and the location of the minimum (or maximum, if a < 0).
      iii. The Hessian is H = A, and since A is symmetric positive definite, so is H, and a solution to...
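The ideas above can be sketched in code: a minimal conjugate gradient iteration for a symmetric positive definite system, followed by a check that the computed minimizer of f(x) = (1/2) x^T A x - b^T x + c indeed satisfies Ax = b (i.e., the gradient Ax - b vanishes there). This is an illustrative sketch using NumPy, not the exact implementation from class; the helper name `conjugate_gradient` and the test matrix are made up for the example.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Illustrative CG solver for Ax = b with A symmetric positive definite."""
    n = b.shape[0]
    x = np.zeros(n)
    r = b - A @ x            # residual r = b - Ax = -grad f(x)
    d = r.copy()             # initial search direction
    max_iter = n if max_iter is None else max_iter
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ad = A @ d
        alpha = rr / (d @ Ad)        # exact line-search step along d
        x = x + alpha * d
        r = r - alpha * Ad           # updated residual
        beta = (r @ r) / rr          # makes new direction A-conjugate to d
        d = r + beta * d
    return x

# Build a random SPD test matrix (M M^T plus a diagonal shift).
M = np.random.default_rng(0).standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)
b = np.arange(1.0, 6.0)

x = conjugate_gradient(A, b)
grad = A @ x - b    # gradient of the quadratic form at x; near zero at the minimizer
```

In exact arithmetic, CG on an n-by-n SPD system terminates in at most n steps, which is why the default iteration cap here is n.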
 Fall '07
 Fedkiw
