G1BINM Introduction to Numerical Methods
School of Mathematical Sciences, University of Nottingham

7 Iterative methods for matrix equations

7.1 The need for iterative methods

We have seen that Gaussian elimination provides a method for finding the exact solution (if rounding errors can be avoided) of a system of equations Ax = b. However, Gaussian elimination requires approximately n^3/3 operations (where n is the size of the system), which may become prohibitively time-consuming if n is very large. Another weakness is that Gaussian elimination requires us to store all the components of the matrix A. In many real applications (especially the numerical solution of differential equations), the matrix A is sparse, meaning that most of its elements are zero, in which case keeping track of the whole matrix is wasteful.

In situations like these it may be preferable to adopt a method which produces an approximate rather than an exact solution. We will describe three iterative methods, which start from an initial guess x_0 and produce successively better approximations x_1, x_2, .... The iteration can be halted as soon as an adequate degree of accuracy is obtained, and the hope is that this takes significantly less time than the exact method of Gaussian elimination would require.

7.2 Splitting the matrix

All the methods we will consider involve splitting the matrix A into the difference between two new matrices S and T:

    A = S - T.

Thus the equation Ax = b gives Sx = Tx + b, based on which we can try the iteration

    S x_{k+1} = T x_k + b.    (7.1)

Now if this procedure converges, say x_k → x as k → ∞, then clearly x solves the original problem Ax = b, but it is not at all clear from the outset whether a scheme like (7.1) converges or not. Evidently there are many possible ways to split the matrix A.
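The general iteration (7.1) can be sketched in a few lines of numpy. This is an illustrative sketch, not part of the notes: the function name `splitting_iteration` and the stopping rule (iterate until successive approximations agree to a tolerance) are choices made here for demonstration, and the small 2x2 system is an assumed example chosen to be diagonally dominant so that the iteration converges.

```python
import numpy as np

def splitting_iteration(S, T, b, x0, tol=1e-10, max_iter=500):
    """Iterate S x_{k+1} = T x_k + b until successive iterates agree to tol.

    Returns the final iterate and the number of iterations taken.
    """
    x = x0.astype(float)
    for k in range(max_iter):
        # One step of (7.1): solve S x_{k+1} = T x_k + b for x_{k+1}.
        x_new = np.linalg.solve(S, T @ x + b)
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Example splitting with S = diag(A), T = S - A (i.e. Jacobi-style),
# applied to a small diagonally dominant system.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])
S = np.diag(np.diag(A))
T = S - A
x, iters = splitting_iteration(S, T, b, np.zeros(2))
```

Note that the code solves a linear system with S at each step rather than inverting S explicitly; when S is diagonal or triangular, that solve is cheap, which is exactly the property asked of a good splitting.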
The tests of a good choice are:

• the new vector x_{k+1} should be easy to compute, that is, S should be easily invertible (for example S might be diagonal or triangular);

• the scheme should converge as rapidly as possible towards the true solution.

These two requirements are conflicting: a choice of splitting which is particularly easy to invert (see e.g. Jacobi's method below) may not converge especially rapidly (or at all). At the other extreme we can converge exactly, in just one step, by using S = A, T = 0; but S = A is usually difficult to invert: that is the whole point of splitting!

It is convenient to introduce the notation

    A = L + D + U  (= S - T),

where L is strictly lower triangular, D is diagonal, and U is strictly upper triangular. For example, if

    A = ( 1 2 3 )
        ( 4 5 6 )
        ( 7 8 9 ),

then

    L = ( 0 0 0 )     D = ( 1 0 0 )     U = ( 0 2 3 )
        ( 4 0 0 ),        ( 0 5 0 ),        ( 0 0 6 )
        ( 7 8 0 )         ( 0 0 9 )         ( 0 0 0 ).
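The L + D + U decomposition above is straightforward to compute with numpy's built-in triangle extractors; the following sketch reproduces the worked example from the notes (the variable names simply mirror the notation L, D, U).

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

L = np.tril(A, k=-1)     # strictly lower triangular part (below the diagonal)
D = np.diag(np.diag(A))  # diagonal part
U = np.triu(A, k=1)      # strictly upper triangular part (above the diagonal)

# The three pieces reassemble the original matrix: A = L + D + U.
assert np.array_equal(L + D + U, A)
```

The offsets k=-1 and k=1 exclude the diagonal itself, which is what "strictly" triangular means; with k=0 the diagonal would be counted twice in L + D + U.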
This note was uploaded on 11/14/2011 for the course MATH 480 taught by Professor Sd during the Spring '11 term at Middle East Technical University.