Rate of CG convergence

First, we suppose that the eigenvalues are perfectly clustered, but assume nothing about the locations of these clusters.

Theorem. If A has only n distinct eigenvalues, then the CG iteration converges in at most n steps.

This is a corollary of (2), since there exists a polynomial p(x) = ∏_{j=1}^{n} (1 − x/λ_j) ∈ P_n that is zero at any specified set of n points {λ_j}.

At the other extreme, suppose we know nothing about any clustering of the eigenvalues, but only that their distances from the origin vary by at most a factor κ ≥ 1. In other words, suppose we know only the 2-norm condition number κ = λ_max/λ_min, where λ_max and λ_min are the extreme eigenvalues of A.

Rate of CG convergence (cont'd)

Theorem. Let the CG iteration be applied to a symmetric positive definite matrix problem Ax = b, where A has 2-norm condition number κ. Then the A-norms of the errors satisfy

  ‖e_n‖_A / ‖e_0‖_A  ≤  2 [ ((√κ + 1)/(√κ − 1))^n + ((√κ + 1)/(√κ − 1))^{−n} ]^{−1}  ≤  2 ((√κ − 1)/(√κ + 1))^n.

See the text for a proof using Chebyshev polynomials.

Since

  (√κ − 1)/(√κ + 1)  ∼  1 − 2/√κ   as κ → ∞,

this implies that if κ is large but not too large, convergence to a specified tolerance can be expected in O(√κ) iterations. This is only an upper bound, and convergence may be faster for special right-hand sides or if the spectrum is clustered.

Example: CG convergence

Consider a 500 × 500 sparse matrix A where we have 1's on the diagonal and a random number from the uniform distribution on [−1, ...
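The κ-based bound above can be checked numerically. The following is a minimal sketch, not part of the original notes: it builds a dense SPD test matrix with a prescribed condition number κ, runs a textbook CG iteration, and compares the A-norm error decay against the bound 2((√κ − 1)/(√κ + 1))^n. The matrix size, spectrum, seed, and number of steps are illustrative choices and are not the 500 × 500 sparse example discussed in the slides.

# Minimal sketch (illustrative parameters): compare CG error decay in the
# A-norm against the bound 2 * ((sqrt(kappa)-1)/(sqrt(kappa)+1))**n.
import numpy as np

def cg(A, b, n_steps):
    """Plain conjugate gradient; returns the iterates x_0, ..., x_{n_steps}."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    iterates = [x.copy()]
    for _ in range(n_steps):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        iterates.append(x.copy())
    return iterates

# SPD test matrix with prescribed condition number kappa (eigenvalues in [1, kappa]).
n, kappa = 200, 1.0e3
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
eigs = np.linspace(1.0, kappa, n)                  # lambda_min = 1, lambda_max = kappa
A = Q @ np.diag(eigs) @ Q.T
x_true = rng.standard_normal(n)
b = A @ x_true

def a_norm(v):
    """A-norm: sqrt(v^T A v)."""
    return np.sqrt(v @ (A @ v))

iterates = cg(A, b, 50)
e0 = a_norm(x_true - iterates[0])
rho = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
for k, xk in enumerate(iterates[::10]):
    step = 10 * k
    ratio = a_norm(x_true - xk) / e0
    bound = 2.0 * rho**step
    print(f"step {step:3d}:  ||e_n||_A/||e_0||_A = {ratio:.3e}   bound = {bound:.3e}")

Running the sketch, the observed error ratios stay below the bound at every step, and typically decay faster, which is consistent with the remark that the theorem gives only an upper bound.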