AIMS Lecture Notes 2006
Peter J. Olver

7. Iterative Methods for Linear Systems

Linear iteration coincides with multiplication by successive powers of a matrix; convergence of the iterates depends on the magnitude of its eigenvalues. We discuss in some detail a variety of convergence criteria based on the spectral radius, on matrix norms, and on eigenvalue estimates provided by the Gerschgorin Circle Theorem. We will then turn our attention to the three most important iterative schemes used to accurately approximate the solutions to linear algebraic systems. The classical Jacobi method is the simplest, while an evident serialization leads to the popular Gauss-Seidel method. Completely general convergence criteria are hard to formulate, although convergence is assured for the important class of diagonally dominant matrices that arise in many applications. A simple modification of the Gauss-Seidel scheme, known as Successive Over-Relaxation (SOR), can dramatically speed up the convergence rate, and is the method of choice in many modern applications. Finally, we introduce the method of conjugate gradients, a powerful semi-direct iterative scheme that, in contrast to the classical iterative schemes, is guaranteed to eventually produce the exact solution.

7.1. Linear Iterative Systems.

We begin with the basic definition of an iterative system of linear equations.

Definition 7.1. A linear iterative system takes the form

    u^{(k+1)} = T u^{(k)},    u^{(0)} = a.                              (7.1)

The coefficient matrix T has size n x n. We will consider both real and complex systems, and so the iterates u^{(k)} are vectors either in R^n (which assumes that the coefficient matrix T is also real) or in C^n. For k = 1, 2, 3, ..., the solution u^{(k)} is uniquely determined by the initial condition u^{(0)} = a.

Powers of Matrices

The solution to the general linear iterative system (7.1) is, at least at first glance, immediate. Clearly,

    u^{(1)} = T u^{(0)} = T a,    u^{(2)} = T u^{(1)} = T^2 a,    u^{(3)} = T u^{(2)} = T^3 a,

and, in general,

    u^{(k)} = T^k a.                                                    (7.2)

Warning: The superscripts on u^{(k)} refer to the iterate number, and should not be mistaken for derivatives.

Thus, the iterates are simply determined by multiplying the initial vector a by the successive powers of the coefficient matrix T. And so, unlike differential equations, proving the existence and uniqueness of solutions to an iterative system is completely trivial.

However, unlike real or complex scalars, the general formulae and qualitative behavior of the powers of a square matrix are not nearly so immediately apparent. (Before continuing, the reader is urged to experiment with simple 2 x 2 matrices, trying to detect patterns.) To make progress, recall how we managed to solve linear systems of differential equations by suitably adapting the known exponential solution from the scalar version...
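As a quick numerical illustration of Definition 7.1 and formula (7.2), here is a minimal Python/NumPy sketch, not part of the original notes: the particular matrix T, the initial vector a, and the helper name iterate are arbitrary choices made for the 2 x 2 experiment the text suggests.

```python
import numpy as np

def iterate(T, a, num_steps):
    """Run the linear iterative system u^(k+1) = T u^(k), u^(0) = a,
    and return the list of iterates u^(0), ..., u^(num_steps)."""
    u = np.asarray(a, dtype=float)
    iterates = [u]
    for _ in range(num_steps):
        u = T @ u          # one iteration step
        iterates.append(u)
    return iterates

if __name__ == "__main__":
    # Illustrative 2 x 2 coefficient matrix and initial vector
    # (chosen arbitrarily; any small example will do).
    T = np.array([[0.5, 0.25],
                  [0.1, 0.6]])
    a = np.array([1.0, -2.0])

    k = 10
    iterates = iterate(T, a, k)

    # Formula (7.2): the k-th iterate equals T^k applied to a.
    direct = np.linalg.matrix_power(T, k) @ a
    print("iteration agrees with T^k a:", np.allclose(iterates[-1], direct))

    # The eventual behavior is governed by the eigenvalues of T:
    # here all eigenvalues have magnitude < 1, so the iterates shrink.
    rho = max(abs(np.linalg.eigvals(T)))
    print("spectral radius:", rho)
    print("norm of u^(10):", np.linalg.norm(iterates[-1]))
```

Replacing T by a matrix with an eigenvalue of magnitude greater than one makes the iterates grow without bound instead, which previews the spectral-radius convergence criteria discussed later in the chapter.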