solprogram3

Comment on Program 3
Foundations of Computational Math 1
Fall 2011

1 The Spectral Radii

1.1 Jacobi and Gauss-Seidel Methods

From homework we have that

$$
A = T =
\begin{pmatrix}
\alpha & -1     &        &        &        \\
-1     & \alpha & -1     &        &        \\
       & \ddots & \ddots & \ddots &        \\
       &        & -1     & \alpha & -1     \\
       &        &        & -1     & \alpha
\end{pmatrix},
\qquad
\lambda_j = \alpha - 2\cos\theta_j, \quad \theta_j = \frac{j\pi}{n+1},
$$

$$
q_j = \big(\sin(\theta_j),\, \sin(2\theta_j),\, \ldots,\, \sin(n\theta_j)\big)^T,
$$

and that $G_J = I - \alpha^{-1} T$. We therefore have

$$
\rho(G_J) = \frac{1}{\alpha}\,|2\cos\theta_n| = \frac{2}{\alpha}\cos\theta_1,
$$

where $\theta_k = \frac{k\pi}{n+1}$. Since the matrix is tridiagonal,

$$
\rho(G_{GS}) = \rho(G_J)^2.
$$

The expected number of steps $k_d$ for Jacobi, given an initial error vector $e^{(0)}$ with norm $\epsilon = \|e^{(0)}\|$ and a desired error $\epsilon_d$ (a prescribed fraction of $\|e^{(0)}\|$), follows from

$$
\epsilon_d \approx \rho(G_J)^{k_d}\,\epsilon
\quad\Longrightarrow\quad
k_d \approx \frac{\log_{10}\epsilon_d - \log_{10}\epsilon}{\log_{10}\rho(G_J)},
$$

and half of that for Gauss-Seidel. We have the following spectral radii for the values of $\alpha$ and $n$ of interest.

  α       n      cos θ_1      ρ(G_J)      ρ(G_GS)
  2       100    0.99951628   0.9995163   0.9990328
  2      1000    0.99999508   0.9999951   0.9999902
  2      2000    0.99999877   0.9999988   0.9999975
  2     10000    0.99999995   1.0000000   0.9999999
  3       100    0.99951628   0.6663442   0.4440146
  3      1000    0.99999508   0.6666634   0.4444401
  3      2000    0.99999877   0.6666658   0.4444433
  3     10000    0.99999995   0.6666666   0.4444444
  4       100    0.99951628   0.4997581   0.2497582
  4      1000    0.99999508   0.4999975   0.2499975
  4      2000    0.99999877   0.4999994   0.2499994
  4     10000    0.99999995   0.5000000   0.2500000

Note that for $\alpha = 2$ the methods are expected to converge very slowly, if at all, in practice. Also note that $\alpha$ is more important than $n$ in determining the radii, so we would not expect significant variation in the convergence as a function of $n$ for a given $\alpha$.

1.2 Symmetric Gauss-Seidel Method

In this section we prove the basic statements in the notes about Symmetric Gauss-Seidel applied to a symmetric positive definite matrix. For Symmetric Gauss-Seidel, if $A = A^T$ and the diagonal elements $D_{ii} \neq 0$, we have

$$
A = D - L - L^T,
$$
$$
M = (D - L)\,D^{-1}\,(D - L^T) = A + L D^{-1} L^T,
$$
$$
G = I - M^{-1}A = (D - L^T)^{-1}\, L\, (D - L)^{-1}\, L^T.
$$

Note that $L^T e_1 = 0$ and $M e_1 = A e_1$. If we assume further that $A$ is symmetric positive definite, i.e., $v \neq 0 \Rightarrow v^T A v > 0$, then it is easy to see that the diagonal elements of $A$ must be positive, i.e., $D$ is symmetric positive definite and diagonal. It also follows that $D^{-1}$ exists and is symmetric positive definite and diagonal. $M$ is also symmetric positive definite and has a Cholesky factorization

$$
M = C C^T, \qquad C = (D - L)\,D^{-1/2}.
$$

The form of $C$ follows simply from $M = (D - L)\,D^{-1}\,(D - L^T)$. It is easy to see that $C$ is lower triangular with positive diagonal elements. Therefore, $M = C C^T$ is the Cholesky factorization of $M$, which is therefore symmetric positive definite.
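As a numerical sanity check on both subsections, here is a minimal Python/NumPy sketch (not part of the original program; the language, the helper names tridiag and spectral_radius, and the test sizes alpha = 3, n = 100 are illustrative choices). It compares the closed-form radii $\rho(G_J) = \frac{2}{\alpha}\cos\frac{\pi}{n+1}$ and $\rho(G_{GS}) = \rho(G_J)^2$ with radii computed directly from the iteration matrices, and checks the Symmetric Gauss-Seidel identities $M = (D-L)D^{-1}(D-L^T) = A + LD^{-1}L^T = CC^T$ with $C = (D-L)D^{-1/2}$.

```python
import numpy as np

def tridiag(alpha, n):
    """Tridiagonal test matrix T = tridiag(-1, alpha, -1) of order n."""
    return (alpha * np.eye(n)
            - np.diag(np.ones(n - 1), -1)
            - np.diag(np.ones(n - 1), 1))

def spectral_radius(G):
    """Spectral radius via the eigenvalues of G."""
    return max(abs(np.linalg.eigvals(G)))

# Illustrative sizes; the assignment uses much larger n as well.
alpha, n = 3.0, 100
A = tridiag(alpha, n)
D = np.diag(np.diag(A))
L = -np.tril(A, -1)          # A = D - L - L^T with L strictly lower triangular

# Jacobi and Gauss-Seidel iteration matrices.
G_J  = np.eye(n) - np.linalg.solve(D, A)        # I - D^{-1} A
G_GS = np.eye(n) - np.linalg.solve(D - L, A)    # I - (D - L)^{-1} A

# Closed-form predictions from the eigenvalue analysis above.
rho_J_pred  = (2.0 / alpha) * np.cos(np.pi / (n + 1))
rho_GS_pred = rho_J_pred ** 2

print(rho_J_pred,  spectral_radius(G_J))    # both ~0.6663442 for alpha=3, n=100
print(rho_GS_pred, spectral_radius(G_GS))   # both ~0.4440146

# Symmetric Gauss-Seidel: M = (D - L) D^{-1} (D - L^T) = C C^T, C = (D - L) D^{-1/2}.
M = (D - L) @ np.linalg.solve(D, (D - L).T)
C = (D - L) @ np.diag(1.0 / np.sqrt(np.diag(D)))
print(np.allclose(M, A + L @ np.linalg.solve(D, L.T)))  # M = A + L D^{-1} L^T
print(np.allclose(M, C @ C.T))                          # Cholesky form of M
print(np.allclose(C, np.linalg.cholesky(M)))            # matches NumPy's Cholesky factor
```

For alpha = 3 and n = 100 the two computed radii should agree with the table entries 0.6663442 and 0.4440146 to the displayed digits, and the three checks on $M$ and $C$ should all print True.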