Gauss-Seidel method

Philipp Ludwig von Seidel [zah'edl] (23 October 1821, Zweibrücken, Germany - 13 August 1896, Munich)

The Gauss-Seidel method is a fixed-point iteration for solving a system of linear equations: the diagonal term of each equation is kept on the left-hand side, and the remaining terms are moved to the right-hand side.

Take Problem 12.2 in the textbook:

   10 x1 + 2 x2 -   x3 =  27
   -3 x1 - 6 x2 + 2 x3 = -61.5
      x1 +   x2 + 5 x3 = -21.5

This can also be written in matrix form:

   [ 10   2  -1 ] [ x1 ]   [  27.0 ]
   [ -3  -6   2 ] [ x2 ] = [ -61.5 ]
   [  1   1   5 ] [ x3 ]   [ -21.5 ]

As a first step we divide each equation by its diagonal term:

       x1 + 0.2 x2 -   0.1 x3 =  2.7
   0.5 x1 +     x2 - (1/3) x3 = 10.25
   0.2 x1 + 0.2 x2 +       x3 = -4.3

Then we move all the off-diagonal terms to the right-hand side:

   x1 =  2.7  - (0.2 x2 - 0.1 x3)
   x2 = 10.25 - (0.5 x1 - (1/3) x3)
   x3 = -4.3  - (0.2 x1 + 0.2 x2)

Or, in matrix form,

   [ x1 ]        [  2.7  ]   [ 0    0.2  -0.1 ] [ x1 ]
   [ x2 ]      = [ 10.25 ] - [ 0.5  0    -1/3 ] [ x2 ]
   [ x3 ] new    [ -4.3  ]   [ 0.2  0.2   0   ] [ x3 ]

On MATLAB (note that naming a variable diag shadows the built-in diag function):

>> a=[10 2 -1; -3 -6 2; 1 1 5]
a =
    10     2    -1
    -3    -6     2
     1     1     5
>> b=[27 -61.5 -21.5]'
b =
   27.0000
  -61.5000
  -21.5000
>> diag=[10 0 0; 0 -6 0; 0 0 5]
diag =
    10     0     0
     0    -6     0
     0     0     5
>> bmd=inv(diag)*b
bmd =
    2.7000
   10.2500
   -4.3000
>> am=a-diag
am =
     0     2    -1
    -3     0     2
     1     1     0
>> amd=inv(diag)*am
amd =
         0    0.2000   -0.1000
    0.5000         0   -0.3333
    0.2000    0.2000         0

If we update each component of x as soon as we recalculate it, we get the Gauss-Seidel method, described in the book. If we update the components only after all three have been recalculated, we get the older Jacobi method (Carl Gustav Jacob Jacobi, December 10, 1804 - February 18, 1851). The Jacobi method converges more slowly, but it is more conducive to matrix implementation, so it is more efficient in MATLAB and on parallel computers.

Let us start the solution with a guess that all components are zero.
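The diagonal splitting above can also be sketched outside MATLAB. Here is a minimal plain-Python translation (an illustration, not code from the notes; the names bmd and amd follow the MATLAB session):

```python
# Diagonal splitting for Problem 12.2: bmd = D^-1 * b, amd = D^-1 * (A - D).
# Plain Python lists stand in for MATLAB matrices.
A = [[10, 2, -1],
     [-3, -6, 2],
     [1, 1, 5]]
b = [27, -61.5, -21.5]
n = len(A)

# Divide each row by its diagonal entry, zeroing the diagonal itself.
bmd = [b[i] / A[i][i] for i in range(n)]
amd = [[A[i][j] / A[i][i] if j != i else 0.0 for j in range(n)]
       for i in range(n)]

print(bmd)       # [2.7, 10.25, -4.3], matching the MATLAB session
for row in amd:  # rows approximately [0, 0.2, -0.1], [0.5, 0, -1/3], [0.2, 0.2, 0]
    print(row)
```

Dividing through by the diagonal is what makes the fixed-point form x = bmd - amd*x drop out directly.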
Then the first iteration gives us

   x1 = 2.7,  x2 = 10.25,  x3 = -4.3

and the next Gauss-Seidel iteration gives us

   x1 = 2.7 - 0.2(10.25) + 0.1(-4.3)   = 0.22
   x2 = 10.25 - 0.5(0.22) + (-4.3)/3   = 8.707
   x3 = -4.3 - 0.2(0.22) - 0.2(8.707)  = -6.085

(The x2 update already uses the new x1, and the x3 update uses the new x1 and x2.)

Jacobi's method implemented in MATLAB goes as follows:

>> x0=zeros(3,1)
x0 =
     0
     0
     0
>> x0=bmd-amd*x0
x0 =
    2.7000
   10.2500
   -4.3000
>> x0=bmd-amd*x0
x0 =
    0.2200
    7.4667
   -6.8900
>> x0=bmd-amd*x0
x0 =
    0.5177
    7.8433
   -5.8373
>> x0=bmd-amd*x0
x0 =
    0.5476
    8.0454
   -5.9722
>> x0=bmd-amd*x0
x0 =
    0.4937
    7.9855
   -6.0186
>> x0=bmd-amd*x0
x0 =
    0.5010
    7.9969
   -5.9958
>> x0=bmd-amd*x0
x0 =
    0.5010
    8.0009
   -5.9996
>> x0=bmd-amd*x0
x0 =
    0.4999
    7.9996
   -6.0004

The method converges when the matrix on the right-hand side has eigenvalues of magnitude less than one, and it converges faster as these eigenvalues get smaller:

>> e=eig(amd)
e =
  -0.3013
   0.1506 + 0.2340i
   0.1506 - 0.2340i
>> maxe=max(e)
maxe =
   -0.3013
>> maxe=max(abs(e))
maxe =
    0.3013

(For a complex vector, MATLAB's max compares magnitudes, so max(e) returns the real eigenvalue -0.3013, and max(abs(e)) gives its magnitude.)

Convergence is guaranteed when the original system has diagonal dominance. This means that in each row the diagonal term is larger in magnitude than the sum of the magnitudes of the off-diagonal terms, which is satisfied for this problem.

Convergence can sometimes be accelerated by the relaxation method

   x_new = lambda*x_new + (1 - lambda)*x_old

For example, with a weighting factor lambda = 0.9, the iteration now goes

>> x0=[0 0 0]'
x0 =
     0
     0
     0
>> x0=0.1*x0+0.9*(bmd-amd*x0)
x0 =
    2.4300
    9.2250
   -3.8700
>> x0=0.1*x0+0.9*(bmd-amd*x0)
x0 =
    0.6642
    7.8930
   -6.3549
>> x0=0.1*x0+0.9*(bmd-amd*x0)
x0 =
    0.5037
    7.8089
   -6.0458
>> x0=0.1*x0+0.9*(bmd-amd*x0)
x0 =
    0.5306
    7.9655
   -5.9709
>> x0=0.1*x0+0.9*(bmd-amd*x0)
x0 =
    0.5119
    7.9915
   -5.9964
>> x0=0.1*x0+0.9*(bmd-amd*x0)
x0 =
    0.5030
    7.9949
   -6.0003
>> x0=0.1*x0+0.9*(bmd-amd*x0)
x0 =
    0.5012
    7.9980
   -5.9997

For nonlinear equations we can linearize and iterate. That is, if we have a vector of nonlinear equations F(x) = 0, we can use a Taylor series expansion to write

   F(x) ≈ F(x0) + F'(x0)(x - x0)

Only now F' is a matrix, called the Jacobian. Then we solve F(x0) + F'(x0)(x - x0) = 0 for x:
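The Jacobi, Gauss-Seidel, and relaxation updates above can be sketched in plain Python as well; the helper names are mine, and this is only an illustrative translation of the iteration x_new = bmd - amd*x:

```python
# Scaled system from the notes: x = bmd - amd*x.
bmd = [2.7, 10.25, -4.3]
amd = [[0.0, 0.2, -0.1],
       [0.5, 0.0, -1/3],
       [0.2, 0.2, 0.0]]

def jacobi_step(x):
    # Every component uses the old x (update only after all three are computed).
    return [bmd[i] - sum(amd[i][j] * x[j] for j in range(3)) for i in range(3)]

def gauss_seidel_step(x):
    # Each component is reused as soon as it is updated.
    x = list(x)
    for i in range(3):
        x[i] = bmd[i] - sum(amd[i][j] * x[j] for j in range(3))
    return x

# Second Gauss-Seidel iterate, starting from the first iterate above:
g = gauss_seidel_step([2.7, 10.25, -4.3])
print(g)  # approximately [0.22, 8.7067, -6.0853]

# Jacobi iteration from zero, and the lambda = 0.9 relaxed variant:
x = [0.0, 0.0, 0.0]
y = [0.0, 0.0, 0.0]
lam = 0.9
for _ in range(30):
    x = jacobi_step(x)
    y = [lam * s + (1 - lam) * yo for s, yo in zip(jacobi_step(y), y)]
print(x)  # approaches the solution [0.5, 8, -6]
print(y)  # the relaxed iteration approaches the same solution
```

Both variants settle on x = [0.5, 8, -6], consistent with the oscillating MATLAB iterates above.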
   x = x0 - [F'(x0)]^(-1) F(x0)

For example, Problem 12.8 is written in terms of two unknowns, x and y. I will rewrite it in terms of x1 and x2:

   F1 = x1^2 + x2^2 - 5 = 0
   F2 = x1^2 - x2  - 1 = 0

Then

   F'(x) = [ 2 x1   2 x2 ]
           [ 2 x1   -1   ]

We can now implement the iteration in MATLAB:

>> f=@(x) [x(1)^2+x(2)^2-5; x(1)^2-x(2)-1]
f =
    @(x) [x(1)^2+x(2)^2-5; x(1)^2-x(2)-1]
>> jac=@(x) [2*x(1), 2*x(2); 2*x(1), -1]
jac =
    @(x) [2*x(1), 2*x(2); 2*x(1), -1]

Starting with x1 = 1, x2 = 1:

>> x0=[1 1]'
x0 =
     1
     1
>> f(x0)
ans =
    -3
    -1
>> jac(x0)
ans =
     2     2
     2    -1
>> x0=x0-jac(x0)\f(x0)
x0 =
    1.8333
    1.6667
>> x0=x0-jac(x0)\f(x0)
x0 =
    1.6160
    1.5641
>> x0=x0-jac(x0)\f(x0)
x0 =
    1.6006
    1.5616
>> x0=x0-jac(x0)\f(x0)
x0 =
    1.6005
    1.5616

The iteration has converged, and to check:

>> f(x0)
ans =
     0
     0
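As a sketch of the same Newton iteration in plain Python (an illustration under my own naming, solving the 2x2 linear system by Cramer's rule instead of MATLAB's backslash):

```python
# Newton's method for F1 = x1^2 + x2^2 - 5 = 0, F2 = x1^2 - x2 - 1 = 0.
def f(x1, x2):
    return (x1**2 + x2**2 - 5, x1**2 - x2 - 1)

def jac(x1, x2):
    # Jacobian matrix F'(x) = [2 x1, 2 x2; 2 x1, -1]
    return ((2*x1, 2*x2),
            (2*x1, -1.0))

x1, x2 = 1.0, 1.0          # starting guess
for _ in range(10):
    f1, f2 = f(x1, x2)
    (a, b), (c, d) = jac(x1, x2)
    det = a*d - b*c
    # Solve J * [s1; s2] = [f1; f2] by Cramer's rule, then x <- x - s.
    s1 = (d*f1 - b*f2) / det
    s2 = (a*f2 - c*f1) / det
    x1, x2 = x1 - s1, x2 - s2

print(x1, x2)  # approximately 1.6005, 1.5616, as in the MATLAB run
```

The first step reproduces the session's (1.8333, 1.6667), and convergence is quadratic, so ten iterations are far more than needed.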
This note was uploaded on 03/27/2012 for the course EGM 3344 taught by Professor Raphael Haftka during the Spring '09 term at the University of Florida.