Numerical Linear Algebra/Computational Mathematics I
MATH 477

Fall 2006
1  Fundamentals

1.0  Preliminaries
The first question we want to answer is: What is computational mathematics?
One possible definition is: The study of algorithms for the solution of computational problems in science and engineering.
Other names for roughly the
14  Arnoldi Iteration and GMRES

14.1  Arnoldi Iteration
The classical iterative solvers we have discussed up to this point were of the form

x^(k) = G x^(k-1) + c

with constant G and c. Such methods are also known as stationary methods. We will
now study a different
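In contrast to stationary methods, the Arnoldi iteration builds an orthonormal basis of a growing Krylov subspace. The following pure-Python sketch (with illustrative helper names `dot`, `matvec`, and `arnoldi`, not taken from the notes) shows the core recurrence: orthogonalize A q_k against all previous basis vectors, recording the coefficients in a Hessenberg matrix H.

```python
# A minimal Arnoldi iteration sketch in pure Python, assuming a small dense
# matrix stored as nested lists.  Helper names are illustrative choices.
from math import sqrt

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def arnoldi(A, b, n):
    """Build an orthonormal basis q_1, ..., q_n of the Krylov subspace
    span{b, Ab, ..., A^(n-1) b} and the (n+1) x n Hessenberg matrix H
    satisfying A Q_n = Q_{n+1} H."""
    norm_b = sqrt(dot(b, b))
    Q = [[bi / norm_b for bi in b]]          # q_1 = b / ||b||
    H = [[0.0] * n for _ in range(n + 1)]
    for k in range(n):
        v = matvec(A, Q[k])                  # candidate next direction
        for j in range(k + 1):               # orthogonalize against q_1..q_{k+1}
            H[j][k] = dot(Q[j], v)
            v = [vi - H[j][k] * qi for vi, qi in zip(v, Q[j])]
        H[k + 1][k] = sqrt(dot(v, v))
        if H[k + 1][k] < 1e-14:              # breakdown: Krylov space exhausted
            break
        Q.append([vi / H[k + 1][k] for vi in v])
    return Q, H                              # Q[k] is the (k+1)-st basis vector
```

GMRES then uses this basis: at each step it minimizes the residual over the Krylov subspace by solving a small least squares problem with H.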
13  Classical Iterative Methods for the Solution of Linear Systems

13.1  Why Iterative Methods?
Virtually all methods for solving Ax = b or Ax = λx require O(m^3) operations. In
practical applications A often has a certain structure and/or is sparse, i.e., A
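The simplest classical iterative method is the Jacobi method, sketched below in pure Python (the function name is an illustrative choice). It has the stationary form x^(k) = G x^(k-1) + c with G = -D^{-1}(L + U) and c = D^{-1} b, where D, L, U are the diagonal, strictly lower, and strictly upper parts of A.

```python
# A sketch of the Jacobi method on a small dense example; for a sparse A each
# sweep would cost O(#nonzeros) instead of O(m^2).  Assumes nonzero diagonal.

def jacobi(A, b, x0, num_iters):
    m = len(A)
    x = list(x0)
    for _ in range(num_iters):
        x_new = []
        for i in range(m):
            # solve equation i for x_i, using the old values elsewhere
            s = sum(A[i][j] * x[j] for j in range(m) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new
    return x
```

The iteration converges whenever the spectral radius of G is below 1, e.g. for strictly diagonally dominant A.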
12  How to Compute the SVD
We saw earlier that the nonzero singular values of A are given by the square roots of
the nonzero eigenvalues of either A*A or AA*. However, computing the singular values
in this way is usually not stable (cf. the solution of the normal equations
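For a tiny example the eigenvalue route is still instructive. The sketch below (pure Python, illustrative function name) forms B = A^T A for a real 2x2 matrix and takes square roots of its eigenvalues, obtained from the quadratic formula; as noted above, forming A^T A squares the condition number, so this is for illustration only.

```python
# Singular values of a real 2x2 matrix as sqrt(eig(A^T A)) -- simple but,
# per the discussion above, not the stable way to compute an SVD.
from math import sqrt

def singular_values_2x2(A):
    # B = A^T A is symmetric 2x2
    b11 = A[0][0]**2 + A[1][0]**2
    b22 = A[0][1]**2 + A[1][1]**2
    b12 = A[0][0]*A[0][1] + A[1][0]*A[1][1]
    # eigenvalues of [[b11, b12], [b12, b22]] via the quadratic formula
    tr, det = b11 + b22, b11*b22 - b12*b12
    disc = sqrt(max(tr*tr - 4*det, 0.0))
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return sqrt(lam1), sqrt(max(lam2, 0.0))   # sigma_1 >= sigma_2 >= 0
```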
11  The QR Algorithm

11.1  QR Algorithm without Shifts
In the previous chapter (in the Maple worksheet 473 Hessenberg.mws) we investigated
two different attempts at tackling the eigenvalue problem. In the first attempt (which
we discarded) the matrix A was multiplied
10  The Rayleigh Quotient and Inverse Iteration
From now on we will restrict the discussion to real symmetric matrices A ∈ R^{m×m} whose
eigenvalues λ_1, ..., λ_m are guaranteed to be real and whose eigenvectors q_1, ..., q_m are
orthogonal. Moreover, in this
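The central quantity of this chapter, the Rayleigh quotient r(x) = x^T A x / (x^T x), is a one-liner to compute. For an eigenvector it returns the corresponding eigenvalue, and near an eigenvector of a symmetric A it is a quadratically accurate eigenvalue estimate. A minimal pure-Python sketch (illustrative function name):

```python
# Rayleigh quotient r(x) = x^T A x / (x^T x) for a real symmetric A
# stored as nested lists.

def rayleigh_quotient(A, x):
    Ax = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
    return sum(xi * axi for xi, axi in zip(x, Ax)) / sum(xi * xi for xi in x)
```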
9  Overview of Eigenvalue Algorithms
As already mentioned earlier, any eigenvalue algorithm needs to be an iterative one.
Some not-so-good approaches are:
1. Compute the roots of the characteristic polynomial. This is usually a very ill-conditioned problem,
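A tiny demonstration (not from the notes) of this ill-conditioning: perturbing a coefficient of p(x) = x^2 - 2x + 1 = (x - 1)^2 by 1e-10 moves the double root x = 1 by about 1e-5, a 100000-fold amplification of the data error.

```python
# Root sensitivity of a polynomial with a double root, via the quadratic
# formula in pure Python.
from math import sqrt

def quadratic_roots(a, b, c):
    disc = sqrt(b*b - 4*a*c)
    return (-b + disc) / (2*a), (-b - disc) / (2*a)

r_exact = quadratic_roots(1.0, -2.0, 1.0)          # double root at x = 1
r_pert  = quadratic_roots(1.0, -2.0, 1.0 - 1e-10)  # roots near 1 +/- 1e-5
```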
8  Eigenvalue Problems

8.1  Motivation and Definition
Matrices can be used to represent linear transformations. Their effects can be: rotation,
reflection, translation, scaling, permutation, etc., and combinations thereof. These
transformations can be rather comp
7  Gaussian Elimination and LU Factorization
In this final section on matrix factorization methods for solving Ax = b we want to
take a closer look at Gaussian elimination (probably the best-known method for solving
systems of linear equations).
The basic idea
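Gaussian elimination can be recorded as a factorization A = LU with L unit lower triangular (holding the elimination multipliers) and U upper triangular. A pure-Python sketch without pivoting, assuming all pivots are nonzero (practical codes add row pivoting):

```python
# LU factorization without pivoting: Gaussian elimination with the
# multipliers stored in L.  Illustrative sketch for small dense matrices.

def lu_factor(A):
    m = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    U = [row[:] for row in A]                    # work on a copy of A
    for k in range(m - 1):
        for i in range(k + 1, m):
            L[i][k] = U[i][k] / U[k][k]          # elimination multiplier
            for j in range(k, m):
                U[i][j] -= L[i][k] * U[k][j]     # subtract multiple of row k
    return L, U
```

Solving Ax = b then reduces to one forward substitution (Ly = b) and one back substitution (Ux = y).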
6  Conditioning and Stability
A computing problem is well-posed if
1. a solution exists (e.g., we want to rule out situations that lead to division by zero),
2. the computed solution is unique,
3. the solution depends continuously on the data, i.e., a small
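Even a well-posed problem can be ill-conditioned. An illustrative example (not from the notes): for a nearly singular 2x2 system, a change of 1e-4 in the data b produces an O(1) change in the solution x.

```python
# Ill-conditioning demo: solve a nearly singular 2x2 system by Cramer's rule
# (det != 0 assumed) and compare solutions for slightly different data.

def solve_2x2(A, b):
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    x0 = (b[0]*A[1][1] - A[0][1]*b[1]) / det
    x1 = (A[0][0]*b[1] - b[0]*A[1][0]) / det
    return [x0, x1]

A  = [[1.0, 1.0], [1.0, 1.0001]]    # nearly singular
x  = solve_2x2(A, [2.0, 2.0])       # solution [2, 0]
xp = solve_2x2(A, [2.0, 2.0001])    # data moved by 1e-4; solution [1, 1]
```

The data changed by 1e-4 but the solution changed by order 1: an amplification of about 10^4, reflecting the large condition number of A.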
5  Least Squares Problems
Consider the solution of Ax = b, where A ∈ C^{m×n} with m > n. In general, this system
is overdetermined and no exact solution is possible.
Example  Fit a straight line to 10 measurements. If we represent the line by f (x) =
mx + c and t
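The line-fit example can be sketched via the normal equations A^T A x = A^T b, where A has rows [t_i, 1] and x = (slope, intercept). The pure-Python function below (illustrative name) solves the resulting 2x2 system directly; this is fine for illustration, though the QR route of Chapter 4 is numerically sounder.

```python
# Least squares straight-line fit via the 2x2 normal equations.

def fit_line(ts, ys):
    n = len(ts)
    # entries of A^T A and A^T y for A with rows [t_i, 1]
    s_tt = sum(t*t for t in ts)
    s_t  = sum(ts)
    s_ty = sum(t*y for t, y in zip(ts, ys))
    s_y  = sum(ys)
    det = s_tt * n - s_t * s_t                   # nonzero if the t_i differ
    slope     = (s_ty * n  - s_t * s_y ) / det
    intercept = (s_tt * s_y - s_t * s_ty) / det
    return slope, intercept
```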
4  QR Factorization

4.1  Reduced vs. Full QR
Consider A ∈ C^{m×n} with m ≥ n. The reduced QR factorization of A is of the form

A = QR,

where Q ∈ C^{m×n} has orthonormal columns and R ∈ C^{n×n} is an upper triangular matrix
such that R(j, j) ≠ 0, j = 1, ..., n.
As with the
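A reduced QR factorization of a tall matrix can be computed column by column with modified Gram-Schmidt, sketched here in pure Python (illustrative function name; assumes linearly independent columns so the diagonal of R is nonzero):

```python
# Reduced QR of an m x n matrix (m >= n) via modified Gram-Schmidt.
from math import sqrt

def reduced_qr(A):
    m, n = len(A), len(A[0])
    V = [[A[i][j] for i in range(m)] for j in range(n)]   # columns of A
    R = [[0.0]*n for _ in range(n)]
    Q = []
    for j in range(n):
        R[j][j] = sqrt(sum(v*v for v in V[j]))
        Q.append([v / R[j][j] for v in V[j]])             # q_j (R[j][j] != 0 assumed)
        for k in range(j + 1, n):
            # immediately remove the q_j component from the later columns
            R[j][k] = sum(q*v for q, v in zip(Q[j], V[k]))
            V[k] = [v - R[j][k]*q for v, q in zip(V[k], Q[j])]
    return Q, R   # Q[j] is the j-th orthonormal column
```

Modified Gram-Schmidt orthogonalizes against each q_j as soon as it is available, which behaves better in floating point than the classical variant.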
3  Projectors
If P ∈ C^{m×m} is a square matrix such that P^2 = P, then P is called a projector. A
matrix satisfying this property is also known as an idempotent matrix.
Remark  It should be emphasized that P need not be an orthogonal projection matrix.
Moreover,
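A quick illustrative check (the example matrix is our choice, not from the notes): P = [[1, 1], [0, 0]] satisfies P^2 = P, so it is a projector, yet P is not symmetric, so it is an oblique rather than an orthogonal projector.

```python
# Verify idempotence of an oblique projector in pure Python.

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[1.0, 1.0], [0.0, 0.0]]
P2 = matmul(P, P)       # equals P, so P is idempotent
# P[0][1] != P[1][0]: P is not Hermitian, hence an oblique projector
```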
2  Singular Value Decomposition
The singular value decomposition (SVD) allows us to transform a matrix A ∈ C^{m×n} to
diagonal form using unitary matrices, i.e.,

A = U Σ V*.    (4)

Here U ∈ C^{m×n} has orthonormal columns, Σ ∈ C^{n×n} is diagonal, and V ∈ C^{n×n} is
unitary. This is the
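A small sanity check of definition (4), with an example of our choosing: for A = [[3, 0], [0, -2]], one valid SVD is U = [[1, 0], [0, -1]], Σ = diag(3, 2), V = I. Note how the sign of the negative entry moves into U so that Σ stays nonnegative.

```python
# Verify A = U Sigma V^* for a hand-built 2x2 example (V^* = V = I here).

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A     = [[3.0, 0.0], [0.0, -2.0]]
U     = [[1.0, 0.0], [0.0, -1.0]]    # orthonormal columns
Sigma = [[3.0, 0.0], [0.0, 2.0]]     # nonnegative diagonal
V     = [[1.0, 0.0], [0.0, 1.0]]     # unitary (identity)

recon = matmul(matmul(U, Sigma), V)  # reproduces A
```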
15  Conjugate Gradients
This method for symmetric positive definite matrices is considered to be the original
Krylov subspace method. It was proposed by Hestenes and Stiefel in 1952, and is
motivated by the following theorem.
Theorem 15.1  If A is symmetric positive definite
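The method itself fits in a few lines. The pure-Python sketch below follows the usual Hestenes-Stiefel recurrences for a symmetric positive definite A; in exact arithmetic it terminates in at most m steps.

```python
# A minimal conjugate gradient sketch for symmetric positive definite A.

def dot(u, v):
    return sum(ui*vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def conjugate_gradient(A, b, num_iters):
    m = len(b)
    x = [0.0]*m
    r = list(b)                              # residual b - Ax for x = 0
    p = list(r)                              # first search direction
    rr = dot(r, r)
    for _ in range(num_iters):
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)              # step length along p
        x = [xi + alpha*pi for xi, pi in zip(x, p)]
        r = [ri - alpha*api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        if rr_new < 1e-28:                   # residual negligible: done
            break
        p = [ri + (rr_new/rr)*pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x
```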