Errors in Gaussian Elimination
Many would mark the birth of modern numerical analysis as a branch of mathematics with the 1947 paper of von Neumann and Goldstine: Numerical Inverting of Matrices of High Order. The perspective, hinted at, if not explicitly
The Power Method
Assume that A ∈ C^{n×n} has n linearly independent eigenvectors v_1, v_2, . . . , v_n. Then any x ∈ C^n can be represented uniquely as

    x = Σ_{i=1}^{n} c_i v_i.    (1)

Here we are interested in what (if any) direction A^k x heads toward as k → ∞. Specifically, we
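The iteration behind this question can be sketched as follows. This is a hypothetical example (the 2×2 matrix and iteration count are illustrative choices, and Python/NumPy stands in for the notes' MATLAB): repeatedly apply A and renormalize, then read off the eigenvalue with a Rayleigh quotient.

```python
import numpy as np

def power_method(A, x0, iters=100):
    """Approximate a dominant eigenpair of A by power iteration."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = A @ x                        # one application of A
        x = y / np.linalg.norm(y)        # renormalize to avoid overflow
    lam = x @ (A @ x)                    # Rayleigh quotient eigenvalue estimate
    return lam, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # eigenvalues 3 and 1
lam, v = power_method(A, np.array([1.0, 0.0]))
print(lam)                               # converges to the dominant eigenvalue, 3
```

Convergence is linear with rate |λ_2/λ_1|, so a large gap between the two largest eigenvalues means fast convergence.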
Normal Equations
If b is not in the column space of A, then Ax = b has no solution; the system is inconsistent. This is typical if A is m × n with m > n, which we will assume here. Let us also assume that A has full rank. Since Ax = b has no solution, one m
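A small sketch of the idea, with a hypothetical 3-by-2 system: form and solve the normal equations A^t A x = A^t b, and check that the residual of the solution is orthogonal to the columns of A.

```python
import numpy as np

# Hypothetical overdetermined system: 3 equations, 2 unknowns (m > n).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])

# b is not in the column space of A, so we solve the normal
# equations A^t A x = A^t b instead of Ax = b.
x = np.linalg.solve(A.T @ A, A.T @ b)
residual = A @ x - b
print(x)                     # least squares solution
print(A.T @ residual)        # ≈ [0, 0]: residual is orthogonal to range(A)
```

Forming A^t A explicitly squares the condition number, which is why QR-based methods are usually preferred in practice.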
Norms of Matrices
We can measure matrix sizes using vector norms, because R^{m×n} is a vector space. Although the names are different, all of the p-norms above give matrix norms if the matrix is stretched into one long vector. The only matrix norm of this flavor
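To illustrate the "stretch into a long vector" idea with a hypothetical matrix: the Frobenius norm is exactly the vector 2-norm of the flattened matrix.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

fro  = np.linalg.norm(A, 'fro')          # Frobenius norm of the matrix
vec2 = np.linalg.norm(A.reshape(-1))     # 2-norm of A stretched into a vector
print(fro, vec2)                         # both equal sqrt(1 + 4 + 9 + 16) = sqrt(30)
```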
Least Squares with Gram-Schmidt
Recall the Gram–Schmidt QR factorization: A = QR, where Q ∈ R^{m×n} satisfies Q^t Q = I and R ∈ R^{n×n} is upper triangular. The cost is 2mn^2 + O(mn) flops. If A is overwritten by Q, then only n^2/2 + O(n) words of memory are required. If
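Once A = QR is in hand, the least squares solution reduces to one triangular solve, R x = Q^t b. A hypothetical sketch (note that `np.linalg.qr` uses Householder reflectors internally; it stands in here for a Gram–Schmidt Q and R):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])

Q, R = np.linalg.qr(A)              # reduced QR: Q is m-by-n, R is n-by-n
x = np.linalg.solve(R, Q.T @ b)     # triangular system R x = Q^t b
print(x)                            # the least squares solution
```

This avoids forming A^t A, so the conditioning of the problem is not squared.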
Gaussian Elimination as a Matrix Factorization
Each of the elementary row operations from Gaussian Elimination (GE) has associated with it a nonsingular matrix with the property that multiplying (on the left) by that associated matrix gives the same result
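As a hypothetical 2×2 illustration, the row operation "row 2 ← row 2 − 2·(row 1)" is exactly left multiplication by an elementary (Gauss) matrix M, and inverting M flips the sign of the multiplier, which is how A = LU emerges.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 5.0]])

# The row operation R2 <- R2 - 2*R1 eliminates the (2,1) entry.
M = np.array([[ 1.0, 0.0],
              [-2.0, 1.0]])      # elementary elimination matrix
U = M @ A                        # same result as the row operation
print(U)                         # upper triangular, rows [2, 1] and [0, 3]

L = np.linalg.inv(M)             # just flips the sign of the multiplier
print(L @ U)                     # recovers A: this is the LU factorization
```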
Matrix Arithmetic
If you don't remember how to add matrices, you should look it up now. Here we are going to talk about matrix products. Let A ∈ R^{m×n} and B ∈ R^{n×p}. Let's also say that the matrix A is the coordinate representation of a linear transformation A : R
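The entrywise definition c_ij = Σ_k a_ik b_kj can be written out directly as a triple loop; this hypothetical sketch checks it against NumPy's built-in product.

```python
import numpy as np

def matmul(A, B):
    """Textbook triple-loop product of an m-by-n A with an n-by-p B."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must agree"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]   # c_ij = sum_k a_ik * b_kj
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(matmul(A, B))      # matches A @ B
```

The loop order (i, j, k) is only one of six equivalent orderings; they compute the same product but touch memory in very different patterns.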
Sensitivity of Linear Least Squares
Assume that A ∈ R^{m×n} has full column rank. We know that the problem

    min_x ‖Ax − b‖_2    (1)

has a unique solution, say x_LS, which satisfies the (nonsingular) normal equations

    A^t A x = A^t b.    (2)

Here we will exploit the equivalence
Linear Least Squares Computations
Assuming that A ∈ R^{m×n} has linearly independent columns, the problem

    arg min_x ‖Ax − b‖_2    (1)

has a unique solution, say x_LS, which is also the unique solution to the normal equations A^t A x = A^t b. This suggests the normal equations
Projections
With the inner product ⟨x, y⟩, we have angles (⟨x, y⟩ = ‖x‖_2 ‖y‖_2 cos θ), and can speak of orthogonality: x ⊥ y ⟺ ⟨x, y⟩ = 0. Here we will consider the standard inner product for R^n: ⟨x, y⟩ ≡ x^t y, but more general inner products can be very useful in
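With the standard inner product, the orthogonal projection of x onto the line spanned by y is (x^t y / y^t y) y. A hypothetical sketch:

```python
import numpy as np

def project(x, y):
    """Orthogonal projection of x onto span{y}, standard inner product."""
    return (x @ y) / (y @ y) * y

x = np.array([3.0, 4.0])
y = np.array([1.0, 0.0])
p = project(x, y)
print(p)                # [3, 0]: the component of x along y
print((x - p) @ y)      # 0: the leftover part is orthogonal to y
```

This one-dimensional projection is exactly the operation repeated inside the Gram–Schmidt process.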
Two QR Factorizations
We compare two techniques for QR factorizations of a full-rank matrix A ∈ R^{m×n}, with m ≥ n. While there are a few other methods available for use, we will talk here about the modified Gram–Schmidt process (MGS), and the Householder QR factorization
C:\matlabR12\work\rk2.m (December 10, 2010)
function w = rk2(fofty,a,b,alpha,N)
% function w = rk2(fofty,a,b,alpha,N) is a Matlab program to solve the IVP
% by using the modified Euler (RK2) method.
% Inputs:
%   fofty is the name of an m-file which evaluates
C:\matlabR12\work\rf.m (December 10, 2010)
w = euler('fofty',0,1,1/3,5);
w = euler('fofty',0,1,1/3,20);
w = euler('fofty',0,1,1/3,80);
w = euler('fofty',0,1,1/3,320);
w = euler('fofty',0,1,1/3,1280);
w = rk2
w = rk2
w = rk2
w = rk
C:\matlabR12\work\euler.m (December 10, 2010)
function w = euler(fofty,a,b,alpha,N)
% function w = euler(fofty,a,b,alpha,N) is a Matlab program to solve the IVP
% by using Euler's method.
% Inputs:
%   fofty is the name of an m-file which evaluates
Norms of Vectors
When we want to measure the length of, or distance between, vectors we need a yardstick that measures in a consistent way, generalizing the idea of absolute value to vector spaces. We can capture the essence of length by requiring that su
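The three p-norms used throughout these notes, computed from their definitions for a hypothetical vector:

```python
import numpy as np

x = np.array([3.0, -4.0])

one = np.sum(np.abs(x))          # 1-norm: sum of absolute values = 7
two = np.sqrt(np.sum(x ** 2))    # 2-norm: Euclidean length = 5
inf = np.max(np.abs(x))          # infinity-norm: largest magnitude = 4
print(one, two, inf)
```

All three satisfy the defining properties of a norm (positivity, homogeneity, triangle inequality), and on R^n they are equivalent up to constants depending only on n.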
Cancellation and Swamping
The IEEE standard 754 requires that the FAFA holds. That is: any arithmetic operation on two floats returns the float nearest the true value. Here we discuss two principal ways information can be lost in this setting. Consider what h
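Both effects are easy to trigger; the numbers below are hypothetical illustrations. Swamping: adding a tiny x to 1.0 rounds the sum back to 1.0, so the subtraction afterward returns exactly zero. Cancellation: subtracting nearly equal quantities discards leading digits, which an algebraic rewrite can avoid.

```python
import math

# Swamping: 1 + x rounds to 1.0, so x is lost entirely.
x = 1e-20
print((1.0 + x) - 1.0)    # 0.0, not 1e-20

# Cancellation: sqrt(1 + t) - 1 loses leading digits for tiny t.
# The rewrite t / (sqrt(1 + t) + 1) is algebraically equal but stable.
t = 1e-12
naive  = math.sqrt(1.0 + t) - 1.0
stable = t / (math.sqrt(1.0 + t) + 1.0)
print(naive, stable)      # stable ≈ 5e-13 to nearly full precision
```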
The Singular Value Decomposition
Let A ∈ R^{m×n}. Then there exist orthogonal matrices U ∈ R^{m×m}, V ∈ R^{n×n}, and a diagonal matrix of singular values Σ = diag(σ_1, σ_2, . . . , σ_p), where p = min(m, n) and σ_1 ≥ σ_2 ≥ · · · ≥ σ_p ≥ 0, such that A = UΣV^t. So what? Recall the two fundamental
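A hypothetical diagonal example makes the pieces concrete: the singular values are the magnitudes of the diagonal entries, A reassembles from the three factors, and the largest singular value is the matrix 2-norm.

```python
import numpy as np

A = np.array([[3.0,  0.0],
              [0.0, -2.0]])

U, s, Vt = np.linalg.svd(A)      # NumPy returns V^t directly
print(s)                         # [3, 2]: sorted descending, nonnegative
print(np.allclose(A, U @ np.diag(s) @ Vt))      # A = U Sigma V^t
print(np.isclose(np.linalg.norm(A, 2), s[0]))   # sigma_1 is the 2-norm
```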
QR Iterations
Consider the iteration

    Q_i R_i ← A_i        (QR factorization of A_i)
    A_{i+1} ← R_i Q_i

Here we have first computed the QR factorization of A_i, and then reversed their product to form A_{i+1}. From A_i = Q_i R_i we have R_i = Q_i^t A_i, and substituting that into A_{i+1} = R_i Q_i gives A_{i+1} = Q_i^t A_i Q_i
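Since each step is a similarity transform, eigenvalues are preserved, and for a hypothetical symmetric example the iterates converge to a diagonal matrix of eigenvalues (this is the unshifted iteration; practical codes add shifts for speed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

Ai = A.copy()
for _ in range(50):
    Q, R = np.linalg.qr(Ai)    # factor the current iterate
    Ai = R @ Q                 # reverse the product: A_{i+1} = Q^t A_i Q
print(np.diag(Ai))             # approaches the eigenvalues of A
print(Ai[1, 0])                # off-diagonal entry decays toward 0
```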
A = LDM^t
If A is nonsingular and A = LU, then we can set D = diag(U) and (since D is nonsingular) M^t ≡ D^{−1}U is a unit upper triangular matrix and A = LDM^t. There is no inherent benefit to this factorization over LU, but it can give us a perspective from
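The construction in the paragraph above, sketched on a hypothetical symmetric matrix (the small LU helper is an illustrative no-pivoting Doolittle routine, not from the notes):

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU without pivoting (assumes the factorization exists)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, :] -= L[i, k] * U[k, :]
    return L, U

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L, U = lu_nopivot(A)
D  = np.diag(np.diag(U))         # D = diag(U)
Mt = np.linalg.inv(D) @ U        # M^t = D^{-1} U, unit upper triangular
print(np.allclose(A, L @ D @ Mt))        # A = L D M^t
print(np.allclose(np.diag(Mt), 1.0))     # unit diagonal
```

Since this A is symmetric, M turns out to equal L, which is exactly the perspective that leads to the LDL^t and Cholesky factorizations.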
The Inverse Power Method
Assume that A ∈ C^{n×n} has n linearly independent eigenvectors v_1, v_2, . . . , v_n, and associated eigenvalues λ_1, λ_2, . . . , λ_n, with |λ_1| > |λ_2| ≥ |λ_i|, i = 3, 4, . . . , n. Then (λ_1, v_1) is a dominant eigenpair of A, and for almost
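The inverse power method runs power iteration on (A − σI)^{-1}, implemented with a linear solve rather than an explicit inverse; it converges to the eigenvalue of A nearest the shift σ. A hypothetical sketch:

```python
import numpy as np

def inverse_iteration(A, shift, x0, iters=50):
    """Power iteration on (A - shift*I)^{-1}: finds the eigenvalue of A
    nearest the shift."""
    n = A.shape[0]
    B = A - shift * np.eye(n)
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = np.linalg.solve(B, x)       # apply (A - shift I)^{-1} via a solve
        x = y / np.linalg.norm(y)
    return x @ (A @ x), x               # Rayleigh quotient, eigenvector

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # eigenvalues 3 and 1
lam, v = inverse_iteration(A, 0.9, np.array([1.0, 0.0]))
print(lam)                              # ≈ 1, the eigenvalue nearest the shift 0.9
```

A good shift makes the relevant eigenvalue of (A − σI)^{-1} enormous relative to the others, so convergence can be very fast.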
The Action of Householder Reflectors
Let A ∈ R^{m×n}, m ≥ n. You may recall that the Householder QR factorization can be written as H_p · · · H_2 H_1 A = R, where p = min(n, m − 1) and H_k = I − (2/(u_k^t u_k)) u_k u_k^t is a Householder reflector which introduces zeros into positions
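The key practical point is that H is never formed: applying H to a matrix costs only a dot product and a rank-one update. The helper names below (`householder_vector`, `apply_householder`) are hypothetical, chosen for this sketch.

```python
import numpy as np

def householder_vector(x):
    """u such that H = I - (2/u^t u) u u^t maps x to (+/-)||x|| e_1."""
    u = x.astype(float).copy()
    s = np.linalg.norm(x)
    u[0] += s if x[0] >= 0 else -s     # sign choice avoids cancellation
    return u

def apply_householder(u, A):
    """Apply H to A without forming H: O(mn) work instead of O(m^2 n)."""
    return A - (2.0 / (u @ u)) * np.outer(u, u @ A)

x = np.array([3.0, 4.0])
u = householder_vector(x)              # u = x + ||x|| e_1 = [8, 4]
Hx = apply_householder(u, x.reshape(2, 1)).ravel()
print(Hx)                              # [-5, 0]: a zero introduced below the first entry
```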
Sensitivity of Simple Eigenvalues
Let A ∈ C^{n×n}. We would like to know how small perturbations in A change its eigenvalues. Of course ∂λ_k/∂a_{ij} measures just this sensitivity, but it isn't practical to compute these n^3 quantities. Suppose Ax = λx, y∗A = λy∗, and x
Conditioning and Stability
A problem is well conditioned if a small change in the input creates a small change in the output (solution). A computation is backward stable if it produces the exact solution to a nearby problem. There is room for debate in th
Condition Numbers
A problem is well conditioned if a small change in the input always creates a small change in the output (solution). A problem is ill-conditioned if a small change in the input can create a large change in the solution (output). There is
Comparing
How do we compare two real numbers, say a and b?

    a − b  (or b − a)              difference
    |a − b|                        absolute (or symmetric) difference
    |a − b| / |a|                  relative (to a) difference
    |a − b| / |b|                  relative (to b) difference
    100 |a − b| / |a|              percent difference
    −log10(|a − b| / |a|)          significant digits (relative to a)
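The measures above, computed side by side for a hypothetical pair of numbers (the function name and dictionary keys are illustrative):

```python
import math

def comparisons(a, b):
    """The comparison measures above; assumes a and b are nonzero."""
    d = a - b
    return {
        "difference": d,
        "absolute difference": abs(d),
        "relative to a": abs(d) / abs(a),
        "relative to b": abs(d) / abs(b),
        "percent (of a)": 100.0 * abs(d) / abs(a),
        "significant digits (vs a)": -math.log10(abs(d) / abs(a)),
    }

for name, value in comparisons(1000.0, 999.0).items():
    print(f"{name:28s} {value:g}")
```

Note how the relative measures, not the raw difference, capture "agreement to about three digits" here.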
The Cholesky Factorization
Symmetric matrices are important because they are common in applications, have some very nice properties, and because the symmetry can be exploited by algorithms to save time and memory. For example, we know that if A = A^t has a
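A textbook column-by-column Cholesky sketch on a hypothetical symmetric positive definite matrix; it computes a lower triangular G with A = G G^t, touching only half of A.

```python
import numpy as np

def cholesky(A):
    """Lower triangular G with A = G G^t, for symmetric positive definite A."""
    n = A.shape[0]
    G = np.zeros_like(A, dtype=float)
    for j in range(n):
        G[j, j] = np.sqrt(A[j, j] - G[j, :j] @ G[j, :j])
        for i in range(j + 1, n):
            G[i, j] = (A[i, j] - G[i, :j] @ G[j, :j]) / G[j, j]
    return G

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
G = cholesky(A)
print(G)                          # [[2, 0], [1, sqrt(2)]]
print(np.allclose(G @ G.T, A))    # True
```

At roughly half the flops and half the storage of LU, this is the standard factorization for symmetric positive definite systems.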
Classical Gram–Schmidt vs Modified Gram–Schmidt
Let A ∈ R^{m×n}, with m ≥ n, and let A have n linearly independent columns a_1, a_2, . . . , a_n. There are many ways to implement the Gram–Schmidt process. Here are two very different implementations: Classical: for k=1:n
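The two variants differ in one line: CGS projects against the original column a_k, while MGS projects against the partially reduced working column. On a hypothetical ill-conditioned matrix (a Hilbert matrix here), the difference in the orthogonality of the computed Q is dramatic.

```python
import numpy as np

def gram_schmidt(A, modified=True):
    """QR by Gram-Schmidt. CGS uses the original column a_k in each
    projection; MGS uses the updated working column."""
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    for k in range(n):
        for j in range(k):
            R[j, k] = Q[:, j] @ (Q[:, k] if modified else A[:, k])
            Q[:, k] -= R[j, k] * Q[:, j]
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
    return Q, R

# Hilbert matrix: notoriously ill-conditioned, a standard stress test.
A = np.array([[1.0 / (i + j + 1) for j in range(8)] for i in range(8)])
for modified in (False, True):
    Q, R = gram_schmidt(A, modified)
    err = np.linalg.norm(Q.T @ Q - np.eye(8))
    print(("MGS" if modified else "CGS"), "orthogonality error:", err)
```

Both variants reproduce A = QR to machine precision; it is the orthogonality of Q that CGS loses.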
Solving Square Triangular Systems
When you were taught Gaussian Elimination, you were probably taught to stop the elimination process when you achieved an upper triangular form, at which point you could solve the system by backward substitution. By upper
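Backward substitution solves the rows from bottom to top, each row using the unknowns already found. A hypothetical sketch:

```python
import numpy as np

def back_substitution(U, b):
    """Solve U x = b for upper triangular, nonsingular U."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # everything to the right of x[i] is already known
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([5.0, 6.0])
print(back_substitution(U, b))   # [1.5, 2.0]
```

The cost is about n^2 flops, negligible next to the O(n^3) elimination that produced U.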
Floating Point Numbers
Most numbers cannot be represented in a computer. We are forced to use approximations which can be represented on the computer. Let the floating point approximation of x be called the float of x and write it as fl(x). In our floating point
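Even 0.1 is not representable in binary floating point; fl(0.1) is a nearby but different number, as a quick hypothetical check shows:

```python
import sys
from decimal import Decimal

# Decimal(0.1) prints the exact value of the double actually stored.
print(Decimal(0.1))              # 0.1000000000000000055511151231...

# Each of 0.1, 0.2, 0.3 is already rounded, so the familiar identity fails.
print(0.1 + 0.2 == 0.3)          # False

# Machine epsilon: the gap between 1.0 and the next larger float.
print(sys.float_info.epsilon)    # 2^-52 for IEEE double precision
```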