lin-cg-notes - Notes on the linear conjugate gradient method

Notes on the linear conjugate gradient method

1 The conjugacy property

Minimizing a convex quadratic function

    f(x) = \frac{1}{2} x^T A x - b^T x + c

is equivalent to solving the linear system

    \nabla f(x) = A x - b = 0.

For positive definite A this can be done using a method called the conjugate gradient method, which has close ties to optimization. The basic idea is to write

    x_k = x_0 + \alpha_0 p_0 + \alpha_1 p_1 + \cdots + \alpha_{k-1} p_{k-1},

where the \alpha_i's are scalars and the p_i's are search directions. Then f(x_k) is a quadratic function of the \alpha_i's, and the minimizing \alpha_i's can be found by solving a linear system. But linear systems are most easily solved when they are diagonal.

The linear system for the minimizing \alpha_i's is

    p_i^T \left[ A \left( x_0 + \sum_{j=0}^{k-1} \alpha_j p_j \right) - b \right] = 0    for i = 0, 1, 2, ..., k-1.

This linear system is k \times k with (i, j) entry given by p_i^T A p_j. The matrix is diagonal if p_i^T A p_j = 0 for all i \neq j; that is, if the p_i's are A-conjugate, or just conjugate if A is understood. If the p_i's are conjugate (with respect to A), then the system of linear equations becomes simply

    p_i^T A p_i \, \alpha_i = p_i^T (b - A x_0) = p_i^T \left[ b - A \left( x_0 + \sum_{j=0}^{i-1} \alpha_j p_j \right) \right].

Note that increasing k does not change the value of \alpha_i for i \leq k. This means that

    x_{k+1} = x_k + \alpha_k p_k.

Note that x_k minimizes f(x_0 + \sum_{i=0}^{k-1} \alpha_i p_i) over all \alpha_i's; that is, x_k minimizes f(z) over all z \in x_0 + span\{p_0, ..., p_{k-1}\}. This is the conjugate gradient minimization property.

Let r_k = \nabla f(x_k) = A x_k - b. In optimization this is clearly the gradient; in linear algebra it is called the residual for x_k. If we had a sequence p_0, p_1, ... of conjugate vectors, then we could design an iterative algorithm for minimizing f(x):

    Given: A, b, x_0, p_0, p_1, p_2, ...
    for k <- 0, 1, 2, ..., n
        r_k     <- A x_k - b
        alpha_k <- - p_k^T r_k / (p_k^T A p_k)
        x_{k+1} <- x_k + alpha_k p_k
    end for

The problem now is to find out how to generate the conjugate p_i's.
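The loop above can be sketched in Python (a minimal illustration, not part of the original notes; NumPy and the helper name conjugate_directions are assumptions). Any A-conjugate set will do for the directions; for a symmetric A, its eigenvectors are one convenient choice, since p_i^T A p_j = \lambda_j \, p_i^T p_j = 0 for i \neq j.

```python
import numpy as np

def conjugate_directions(A, b, x0, P):
    """Run the loop from the notes: minimize f(x) = 0.5 x^T A x - b^T x
    along the conjugate directions p_0, ..., p_{n-1} (the columns of P)."""
    x = x0.astype(float)
    for k in range(P.shape[1]):
        p = P[:, k]
        r = A @ x - b                     # r_k = A x_k - b (gradient/residual)
        alpha = -(p @ r) / (p @ (A @ p))  # from p_k^T A p_k alpha_k = -p_k^T r_k
        x = x + alpha * p                 # x_{k+1} = x_k + alpha_k p_k
    return x

# Example: a 2x2 positive definite A; the eigenvectors of A are A-conjugate.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
P = np.linalg.eigh(A)[1]                  # orthonormal eigenvectors of A
x = conjugate_directions(A, b, np.zeros(2), P)
# After n = 2 steps, x minimizes f over all of R^2, so A x = b.
```

This makes the minimization property concrete: once every conjugate direction has been used, the iterate minimizes f over the whole space and therefore solves A x = b.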
At the beginning, any p_0 by itself is conjugate, so we have a place to start. We can then proceed using mathematical induction. Now let's suppose we have generated p_0, p_1, ..., p_k which are, so far, all conjugate. We will also show that the residuals ...
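The preview breaks off here, before the notes' own construction of the p_i's. As a generic illustration of the conjugacy property only (the function name and test matrix are made up, and this Gram-Schmidt-style sweep is not the recurrence the conjugate gradient method itself uses), one can conjugate any linearly independent set against A:

```python
import numpy as np

def a_conjugate(A, V):
    """Make the columns of V A-conjugate by a Gram-Schmidt-style sweep:
    subtract from each v its A-projection onto the directions built so far."""
    P = []
    for v in V.T:
        p = v.astype(float)
        for q in P:
            p = p - (q @ (A @ v)) / (q @ (A @ q)) * q
        P.append(p)
    return np.column_stack(P)

# Example: conjugate the standard basis against a positive definite A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
P = a_conjugate(A, np.eye(2))
G = P.T @ A @ P   # should be diagonal: p_i^T A p_j = 0 for i != j
```

The off-diagonal entries of P^T A P vanish, which is exactly the conjugacy condition that makes the linear system for the alpha_i's diagonal.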
This note was uploaded on 04/01/2012 for the course 22M 174 taught by Professor David Stewart during the Spring '12 term at University of Iowa.
