... moving a distance $\alpha_n$ (the step length) in the direction $p_{n-1}$ (the search direction). By a succession of such steps, the CG iteration attempts to find a minimum of a nonlinear function. Which function to minimize?

7 / 25

Conjugate gradients as an optimization algorithm (cont’d)

Cannot use $\|e\|_A$ or $\|e\|_A^2$, as neither can be evaluated without knowing $x_*$. On the other hand, given $A$, $b$, and $x \in \mathbb{R}^m$, the quantity

$$\varphi(x) = \frac{1}{2} x^T A x - x^T b$$

can certainly be evaluated. Like $\|e\|_A^2$, $\varphi(x)$ must achieve its minimum uniquely at $x = x_*$, since

$$\begin{aligned}
\|e_n\|_A^2 = e_n^T A e_n &= (x_* - x_n)^T A (x_* - x_n) \\
&= x_n^T A x_n - 2 x_n^T A x_* + x_*^T A x_* \\
&= x_n^T A x_n - 2 x_n^T b + x_*^T b \\
&= 2\varphi(x_n) + \text{constant},
\end{aligned}$$

where the third line uses $A x_* = b$.

8 / 25

Conjugate gradients as an optimization algorithm (cont’d)

The CG iteration can be interpreted as an iterative process for minimizing the quadratic function $\varphi(x)$ of $x \in \mathbb{R}^m$. At each step, an iterate $x_n = x_{n-1} + \alpha_n p_{n-1}$ is computed that minimizes $\varphi(x)$ over all $x$ in the one-dimensional space $x_{n-1} + \langle p_{n-1} \rangle$. It can be readily confirmed that the formula

$$\alpha_n = \frac{r_{n-1}^T r_{n-1}}{p_{n-1}^T A p_{n-1}}$$

ensures that $\alpha_n$ is optimal in this sense among all step lengths $\alpha$. What makes the CG iteration remarkable is the choice of the search direction $p_{n-1}$, which has the special property that minimizing $\varphi(x)$ over $x_{n-1} + \langle p_{n-1} \rangle$ actually minimizes it over all of $\mathcal{K}_n$.

9 / 25

Analogy between CG iteration and Lanczos iteration

A close analogy holds between the CG iteration for solving $Ax = b$ and the Lanczos iteration for finding eigenvalues ...
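To make the identity on slide 8 concrete, here is a minimal NumPy sketch (not from the slides; the random test matrix, seed, and helper name `phi` are illustrative choices). It builds a symmetric positive definite system and checks that $\|e\|_A^2 = 2\varphi(x) + x_*^T b$ at arbitrary trial points, which is why minimizing $\varphi$ is equivalent to minimizing the $A$-norm of the error:

```python
# Sketch: verify ||e||_A^2 = 2*phi(x) + x*^T b for an SPD matrix A.
import numpy as np

rng = np.random.default_rng(0)
m = 5
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)        # symmetric positive definite
b = rng.standard_normal(m)
x_star = np.linalg.solve(A, b)     # exact solution of A x* = b

def phi(x):
    """phi(x) = 1/2 x^T A x - x^T b."""
    return 0.5 * x @ A @ x - x @ b

for _ in range(3):
    x = rng.standard_normal(m)     # arbitrary trial point
    e = x_star - x                 # error e = x* - x
    err_A_sq = e @ A @ e           # ||e||_A^2 = e^T A e
    # the "constant" in the derivation is x*^T A x* = x*^T b
    print(np.isclose(err_A_sq, 2 * phi(x) + x_star @ b))  # True
```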
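The claim on slide 9 about the step length can be sanity-checked the same way. The sketch below uses an assumed standard formulation of the CG iteration (it is not code given in the slides) and asserts at each step that $\alpha_n = r_{n-1}^T r_{n-1} / (p_{n-1}^T A p_{n-1})$ agrees with the exact one-dimensional minimizer $p_{n-1}^T r_{n-1} / (p_{n-1}^T A p_{n-1})$ of $\varphi$ along $p_{n-1}$, while $\varphi(x_n)$ decreases monotonically:

```python
# Sketch: CG step lengths coincide with exact 1-D minimizers of phi.
import numpy as np

rng = np.random.default_rng(1)
m = 8
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)        # symmetric positive definite
b = rng.standard_normal(m)

def phi(x):
    return 0.5 * x @ A @ x - x @ b

x = np.zeros(m)
r = b - A @ x                      # initial residual r_0 = b
p = r.copy()                       # initial search direction p_0 = r_0
for n in range(1, m + 1):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)     # CG step length alpha_n
    # exact minimizer of phi(x + alpha*p) over alpha; it equals the CG
    # formula because r_{n-1} is orthogonal to p_{n-2}
    alpha_exact = (p @ r) / (p @ Ap)
    assert np.isclose(alpha, alpha_exact)
    x = x + alpha * p              # x_n = x_{n-1} + alpha_n p_{n-1}
    r_new = r - alpha * Ap         # r_n = r_{n-1} - alpha_n A p_{n-1}
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p           # next A-conjugate search direction
    r = r_new
    print(n, phi(x))               # phi decreases at every step

print(np.allclose(A @ x, b))      # converged in at most m steps: True
```

The remarkable property mentioned on slide 9 is visible here too: although each step only performs a one-dimensional line search, the final `x` after $m$ steps solves the full system.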