# solprogram2: Comment on Program 2, Foundations of Computational Math 1, Fall 2009
## 1 The Spectral Radii

From homework we have that

$$
A = T_\alpha = \begin{pmatrix}
\alpha & -1 & 0 & \cdots & & & 0 \\
-1 & \alpha & -1 & 0 & \cdots & & 0 \\
0 & -1 & \alpha & -1 & 0 & \cdots & 0 \\
 & \ddots & \ddots & \ddots & \ddots & \ddots & \\
0 & \cdots & 0 & -1 & \alpha & -1 & 0 \\
0 & \cdots & & 0 & -1 & \alpha & -1 \\
0 & \cdots & & & 0 & -1 & \alpha
\end{pmatrix}
$$

$$
\lambda_j = \alpha - 2\cos j\theta, \qquad \theta = \frac{\pi}{n+1}
$$

$$
q_j = \big(\sin(j\theta),\ \sin(2j\theta),\ \ldots,\ \sin(nj\theta)\big)^T
$$

and that $G_J = \alpha^{-1} T_0$. Writing $\theta_j = j\theta$, we therefore have

$$
\rho(G_J) = \left|\frac{2\cos\theta_n}{\alpha}\right| = \frac{2\cos\theta_1}{\alpha}
$$

and, since the matrix is tridiagonal,

$$
\rho(G_{GS}) = \rho^2(G_J).
$$

The number of expected steps for Jacobi given an initial error norm of $e_0$ and a desired error norm of $e_d$ is then

$$
d = \frac{\log_{10} e_d - \log_{10} e_0}{\log_{10} \rho(G_J)}
$$

and half of that for Gauss-Seidel. We have the following spectral radii for the values of $\alpha$ and $n$ of interest.
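These formulas are easy to evaluate directly. The sketch below computes $\rho(G_J)$, $\rho(G_{GS})$, and the expected step count $d$; the error-reduction target ($e_0 = 1$, $e_d = 10^{-6}$) is an assumed example, not a value taken from the text.

```python
import math

def jacobi_radius(alpha, n):
    """Spectral radius of the Jacobi iteration matrix for T_alpha:
    rho(G_J) = 2 cos(pi / (n + 1)) / alpha."""
    return 2.0 * math.cos(math.pi / (n + 1)) / alpha

def expected_steps(rho, e0, ed):
    """Iterations d needed to reduce the error norm from e0 to ed,
    d = (log10 ed - log10 e0) / log10 rho."""
    return (math.log10(ed) - math.log10(e0)) / math.log10(rho)

alpha, n = 3, 100
rho_j = jacobi_radius(alpha, n)   # ~0.6663442, matching the table row (3, 100)
rho_gs = rho_j ** 2               # ~0.4440146; tridiagonal => rho_gs = rho_j^2
d_j = expected_steps(rho_j, e0=1.0, ed=1e-6)   # assumed example tolerances
d_gs = d_j / 2.0                  # half as many steps for Gauss-Seidel
```

For $\alpha = 3$, $n = 100$ this gives roughly 34 Jacobi steps for six orders of magnitude of error reduction, versus about 17 for Gauss-Seidel.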

| $\alpha$ | $n$ | $\cos\theta_1$ | $\rho(G_J)$ | $\rho(G_{GS})$ |
|---|---|---|---|---|
| 2 | 100 | 0.99951628 | 0.9995163 | 0.9990328 |
| 2 | 1000 | 0.99999508 | 0.9999951 | 0.9999902 |
| 2 | 2000 | 0.99999877 | 0.9999988 | 0.9999975 |
| 2 | 10000 | 0.99999995 | 1.0000000 | 0.9999999 |
| 3 | 100 | 0.99951628 | 0.6663442 | 0.4440146 |
| 3 | 1000 | 0.99999508 | 0.6666634 | 0.4444401 |
| 3 | 2000 | 0.99999877 | 0.6666658 | 0.4444433 |
| 3 | 10000 | 0.99999995 | 0.6666666 | 0.4444444 |
| 4 | 100 | 0.99951628 | 0.4997581 | 0.2497582 |
| 4 | 1000 | 0.99999508 | 0.4999975 | 0.2499975 |
| 4 | 2000 | 0.99999877 | 0.4999994 | 0.2499994 |
| 4 | 10000 | 0.99999995 | 0.5000000 | 0.2500000 |

Note that for $\alpha = 2$ the methods converge very slowly, if at all in practice. Also note that $\alpha$ is more important than $n$ in determining the radii, so we would not expect significant variation in the convergence as a function of $n$ for a given $\alpha$.

For Symmetric Gauss-Seidel we have, for a general symmetric positive definite matrix $A$,

$$
A = D - L - L^T
$$
$$
M = (D - L) D^{-1} (D - L^T) = A + L D^{-1} L^T
$$
$$
G = I - M^{-1} A = (D - L^T)^{-1} L (D - L)^{-1} L^T
$$

If $A$ is symmetric positive definite then so are $D$ and $M$. $M$ therefore has a Cholesky factorization

$$
M = C C^T, \qquad C = (D - L) D^{-1/2}.
$$

The iteration matrix $G$ is not necessarily symmetric, but it is similar to a symmetric positive semidefinite matrix. It therefore has real, nonnegative eigenvalues and is positive semidefinite in this more general sense. The eigenvalues of $G$ can also be shown to satisfy a generalized symmetric definite eigenvalue problem, and from this it can be seen that $\rho(G) < 1$ for $A$ symmetric positive definite. All of these facts can be derived as follows. Since $M$ and $A$ are symmetric positive definite, it is easily seen that $L D^{-1} L^T$ is symmetric positive semidefinite. Since $M = A + L D^{-1} L^T$, we have for any vector $v$

$$
v^T M v = v^T A v + v^T L D^{-1} L^T v \ge v^T A v
$$

and therefore

$$
\frac{v^T A v}{v^T M v} \le 1.
$$
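The matrix identities above are straightforward to sanity-check numerically. The sketch below (assuming NumPy; the test case $\alpha = 3$, $n = 6$ is an arbitrary illustrative choice, not from the text) builds $T_\alpha$, forms the symmetric Gauss-Seidel splitting matrix $M$, and verifies both the identity $M = A + L D^{-1} L^T$ and the factorization $M = C C^T$ with $C = (D - L) D^{-1/2}$.

```python
import numpy as np

n, alpha = 6, 3.0   # arbitrary test values; T_alpha is SPD for alpha > 2
A = alpha * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # T_alpha

D = np.diag(np.diag(A))
L = -np.tril(A, k=-1)            # A = D - L - L^T, L strictly lower triangular
Dinv = np.linalg.inv(D)
M = (D - L) @ Dinv @ (D - L.T)   # symmetric Gauss-Seidel splitting matrix

# Identity M = A + L D^{-1} L^T
assert np.allclose(M, A + L @ Dinv @ L.T)

# Cholesky factor C = (D - L) D^{-1/2} gives M = C C^T
C = (D - L) @ np.diag(1.0 / np.sqrt(np.diag(D)))
assert np.allclose(C @ C.T, M)

# Eigenvalues of G = I - M^{-1} A are real and lie in [0, 1)
G = np.eye(n) - np.linalg.inv(M) @ A
lam = np.linalg.eigvals(G)
assert np.allclose(lam.imag, 0.0)
lam = np.sort(lam.real)
```

Note that $L^T$ is strictly upper triangular and hence singular, so $G$ always has a zero eigenvalue, which is why the eigenvalues are nonnegative rather than strictly positive.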
For any eigenpair $(\lambda, u)$ of $G$ with $u^T u = 1$ we have

$$
(I - M^{-1} A) u = \lambda u, \qquad \lambda = 1 - \mu, \qquad M^{-1} A u = \mu u.
$$

We then have

$$
M^{-1} A u = \mu u \;\Longrightarrow\; A u = \mu M u \;\Longrightarrow\; u^T A u = \mu\, u^T M u \;\Longrightarrow\; \mu = \frac{u^T A u}{u^T M u} \le 1.
$$

Since $A$ is positive definite, we also have $\mu > 0$, and therefore

$$
0 \le \lambda < 1.
$$

The statement $A u = \mu M u$ is the definition of a generalized symmetric definite eigenvalue problem.
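The eigenvalue argument can be checked the same way. A minimal sketch (assuming NumPy; the SPD test matrix $T_\alpha$ with $\alpha = 3$, $n = 6$ is again an arbitrary choice): compute the eigenpairs of $M^{-1}A$, confirm each satisfies the generalized problem $A u = \mu M u$ with $\mu$ equal to the quotient $u^T A u / u^T M u$, and recover $\lambda = 1 - \mu \in [0, 1)$.

```python
import numpy as np

n, alpha = 6, 3.0
A = alpha * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD test matrix T_alpha
D = np.diag(np.diag(A))
L = -np.tril(A, k=-1)
M = (D - L) @ np.linalg.inv(D) @ (D - L.T)

# Eigenpairs of M^{-1} A give the mu's; lambda = 1 - mu are eigenvalues of G
mu, U = np.linalg.eig(np.linalg.inv(M) @ A)
mu = mu.real   # eigenvalues are provably real here
for j in range(n):
    u = U[:, j].real
    # A u = mu M u : the generalized symmetric definite eigenproblem
    assert np.allclose(A @ u, mu[j] * (M @ u))
    # mu equals the quotient u^T A u / u^T M u, which lies in (0, 1]
    assert np.isclose(mu[j], (u @ A @ u) / (u @ M @ u))

lam = 1.0 - mu   # eigenvalues of G = I - M^{-1} A, all in [0, 1)
```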
