04.08.1
Chapter 04.08
Gauss-Seidel Method

After reading this chapter, you should be able to:
1. solve a set of equations using the Gauss-Seidel method,
2. recognize the advantages and pitfalls of the Gauss-Seidel method, and
3. determine under what conditions the Gauss-Seidel method always converges.

Why do we need another method to solve a set of simultaneous linear equations?

In certain cases, such as when a system of equations is large, iterative methods of solving equations are more advantageous. Elimination methods, such as Gaussian elimination, are prone to large round-off errors for a large set of equations. Iterative methods, such as the Gauss-Seidel method, give the user control of the round-off error. Also, if the physics of the problem are well known, initial guesses needed in iterative methods can be made more judiciously, leading to faster convergence.

What is the algorithm for the Gauss-Seidel method?

Given a general set of $n$ equations and $n$ unknowns, we have
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n &= c_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n &= c_2 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \dots + a_{nn}x_n &= c_n
\end{aligned}
\]
If the diagonal elements are nonzero, each equation is rewritten for the corresponding unknown; that is, the first equation is rewritten with $x_1$ on the left-hand side, the second equation is rewritten with $x_2$ on the left-hand side, and so on, as follows:
\[
\begin{aligned}
x_1 &= \frac{c_1 - a_{12}x_2 - a_{13}x_3 - \dots - a_{1n}x_n}{a_{11}} \\
x_2 &= \frac{c_2 - a_{21}x_1 - a_{23}x_3 - \dots - a_{2n}x_n}{a_{22}} \\
&\;\;\vdots \\
x_{n-1} &= \frac{c_{n-1} - a_{n-1,1}x_1 - a_{n-1,2}x_2 - \dots - a_{n-1,n-2}x_{n-2} - a_{n-1,n}x_n}{a_{n-1,n-1}} \\
x_n &= \frac{c_n - a_{n1}x_1 - a_{n2}x_2 - \dots - a_{n,n-1}x_{n-1}}{a_{nn}}
\end{aligned}
\]
These equations can be rewritten in summation form as
\[
\begin{aligned}
x_1 &= \frac{c_1 - \sum_{j=1,\, j\neq 1}^{n} a_{1j}x_j}{a_{11}} \\
x_2 &= \frac{c_2 - \sum_{j=1,\, j\neq 2}^{n} a_{2j}x_j}{a_{22}} \\
&\;\;\vdots \\
x_{n-1} &= \frac{c_{n-1} - \sum_{j=1,\, j\neq n-1}^{n} a_{n-1,j}x_j}{a_{n-1,n-1}} \\
x_n &= \frac{c_n - \sum_{j=1,\, j\neq n}^{n} a_{nj}x_j}{a_{nn}}
\end{aligned}
\]
Hence, for any row $i$,
\[
x_i = \frac{c_i - \sum_{j=1,\, j\neq i}^{n} a_{ij}x_j}{a_{ii}}, \qquad i = 1, 2, \dots, n.
\]
Now, to find the $x_i$'s, one assumes an initial guess for the $x_i$'s and then uses the rewritten equations to calculate the new estimates. Remember, one always uses the most recent estimates to calculate the next estimates, $x_i$. At the end of each iteration, one calculates the absolute relative approximate error for each $x_i$ as
\[
\left|\epsilon_a\right|_i = \left| \frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}} \right| \times 100
\]
where $x_i^{\text{new}}$ is the recently obtained value of $x_i$, and $x_i^{\text{old}}$ is the previous value of $x_i$.

When the absolute relative approximate error for each $x_i$ is less than the pre-specified tolerance, the iterations are stopped.
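The iteration described above can be sketched in a few lines of NumPy. This is a minimal illustrative implementation, not the chapter's own code: the function name, the tolerance convention (percent, matching the error formula above), and the test system used below are assumptions made for the example. Note that updated values of $x_j$ for $j < i$ are used immediately within the same sweep, which is what distinguishes Gauss-Seidel from the Jacobi method.

```python
import numpy as np

def gauss_seidel(A, c, x0, tol=0.05, max_iter=100):
    """Solve A x = c by Gauss-Seidel iteration.

    Stops when the absolute relative approximate error (in percent)
    of every unknown drops below the pre-specified tolerance `tol`,
    or after `max_iter` iterations.
    """
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(c)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Most recent estimates are used: x[j] for j < i was
            # already updated in this sweep, x[j] for j > i was not.
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (c[i] - s) / A[i, i]
        # Absolute relative approximate error, in percent, per unknown.
        err = np.abs((x - x_old) / x) * 100.0
        if np.all(err < tol):
            break
    return x

# Illustrative usage on an assumed diagonally dominant system
# (diagonal dominance guarantees convergence, as the chapter notes):
A = [[12, 3, -5],
     [1, 5, 3],
     [3, 7, 13]]
c = [1, 28, 76]
x = gauss_seidel(A, c, x0=[1, 0, 1], tol=1e-8, max_iter=200)
```

For this system the iterates settle to the exact solution $x = (1, 3, 4)$. Swapping rows to make the coefficient matrix diagonally dominant before iterating is a common practical step, since convergence is otherwise not guaranteed.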
Spring '08, Kaw, A.