Since A is positive definite, the first term is greater than or equal to 0.
The second term is 0 if and only if Δx = 0, i.e., x = x_n.
Thus ‖e‖_A is minimal if and only if x = x_n, as claimed.
The monotonicity property is a consequence of the inclusion K_n ⊆ K_{n+1}; and since K_n is a subspace of ℝ^m of dimension n as long as convergence has not yet been achieved, convergence must be achieved in at most m steps.
That is, each step of the conjugate direction method cuts down the error term component by component.
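The two claims above — monotone decrease of the A-norm error and exact convergence in at most m steps — can be illustrated with a minimal NumPy sketch (not from the source; the test matrix and seed are arbitrary choices for illustration):

```python
import numpy as np

# Small SPD system: in exact arithmetic CG converges in at most m steps,
# and the A-norm of the error decreases monotonically.
rng = np.random.default_rng(0)
m = 6
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)       # symmetric positive definite
b = rng.standard_normal(m)
x_star = np.linalg.solve(A, b)    # exact solution, for measuring the error

def a_norm(v):
    return np.sqrt(v @ A @ v)     # ||v||_A = sqrt(v^T A v)

x = np.zeros(m)
r = b - A @ x                     # initial residual
p = r.copy()                      # initial search direction
errs = [a_norm(x_star - x)]
for n in range(m):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)    # step length
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p          # new A-conjugate search direction
    r = r_new
    errs.append(a_norm(x_star - x))

# errs is nonincreasing and (up to rounding) reaches zero at step m
print(errs)
```

For this small, well-conditioned matrix the final A-norm error is at the level of rounding error, matching the at-most-m-steps guarantee.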
Optimality of conjugate gradients (cont'd)

The guarantee that the CG iteration converges in at most m steps is void in floating point arithmetic.
For arbitrary matrices A on a real computer, no decisive reduction in ‖e_n‖_A will necessarily be observed at all when n = m.
In practice, however, CG is used not for arbitrary matrices but for matrices whose spectra are well behaved enough (partly thanks to preconditioning) that convergence to a desired accuracy is achieved for n ≪ m.
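The n ≪ m behavior is easy to observe on a matrix with a well-behaved spectrum. The following sketch (not from the source; the size, eigenvalue cluster, and tolerance are illustrative choices) builds a 500×500 SPD matrix with eigenvalues clustered in [1, 1.2] and runs CG to a tight residual tolerance:

```python
import numpy as np

# SPD matrix with a tightly clustered spectrum (eigenvalues in [1, 1.2]):
# CG reaches a tight tolerance in far fewer than m iterations.
rng = np.random.default_rng(1)
m = 500
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))  # random orthogonal Q
eigs = 1.0 + 0.2 * rng.random(m)
A = (Q * eigs) @ Q.T                              # A = Q diag(eigs) Q^T
b = rng.standard_normal(m)

x = np.zeros(m)
r = b - A @ x
p = r.copy()
n_iters = 0
while np.linalg.norm(r) > 1e-10 * np.linalg.norm(b):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    n_iters += 1

print(n_iters, "iterations for m =", m)
```

With condition number about 1.2, the iteration stops after roughly a dozen steps rather than anywhere near m = 500; preconditioning aims to produce exactly this kind of spectrum.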
The theoretical exact convergence at n = m has no relevance to this use of the CG iteration in scientific computing.

Conjugate gradients as an optimization algorithm

The CG iteration has a certain optimality property: it minimizes ‖e_n‖_A at step n over all vectors x ∈ K_n.
This is a standard form of problem: minimizing a nonlinear function of x ∈ ℝ^m.
At the heart of the iteration is the formula

    x_n = x_{n-1} + α_n p_{n-1}

This is a familiar equation in optimization, in which a current approximation x_{n-1} is updated to a new approximation x_n by moving a step of length α_n in the search direction p_{n-1}.
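The optimization reading of this update can be made concrete: CG minimizes the quadratic φ(x) = ½xᵀAx − xᵀb (whose minimizer is the solution of Ax = b), and α_n is the exact line minimizer of φ along p_{n-1}. A minimal NumPy sketch (not from the source; the matrix and sampling grid are illustrative) checks this for the first step, where p = r:

```python
import numpy as np

# phi(x) = 0.5 x^T A x - x^T b is minimized where A x = b.
# Each CG step x_n = x_{n-1} + alpha_n * p_{n-1} is an exact line search:
# alpha_n minimizes phi along the search direction.
rng = np.random.default_rng(2)
m = 5
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)       # symmetric positive definite
b = rng.standard_normal(m)

def phi(x):
    return 0.5 * x @ A @ x - x @ b

x = np.zeros(m)
r = b - A @ x                     # residual = -gradient of phi at x
p = r.copy()                      # first search direction: p = r
Ap = A @ p
alpha = (r @ r) / (p @ Ap)        # CG step length

# phi(x + t p) is a parabola in t; sample it and locate its minimum.
ts = np.linspace(0.0, 2.0 * alpha, 201)
vals = [phi(x + t * p) for t in ts]
best_t = ts[int(np.argmin(vals))]
print(alpha, best_t)              # the sampled minimizer agrees with alpha
```

The sampled minimizer of the parabola coincides with α (to grid resolution), confirming that the CG step length performs an exact one-dimensional minimization along the search direction.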
