Errors in Gaussian Elimination
Many would mark the birth of modern numerical analysis as a branch of mathematics with the 1947 paper of von Neumann and Goldstine, "Numerical Inverting of Matrices of High Order."
The Power Method
Assume that A ∈ C^{n×n} has n linearly independent eigenvectors v_1, v_2, . . . , v_n. Then any x ∈ C^n can be represented uniquely as

    x = sum_{i=1}^{n} c_i v_i.    (1)
Here we are interested in what (if anything) the sequence x, Ax, A^2 x, . . . converges to.
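A minimal sketch of the resulting power iteration, written here in Python with plain lists rather than the notes' MATLAB; the function name and the 2×2 test matrix are illustrative choices, not from the notes:

```python
import math

def power_method(A, x, iters=100):
    """Repeatedly apply A and renormalize. If c_1 != 0 in (1) and
    |lambda_1| > |lambda_j| for j > 1, then x aligns with v_1 and the
    Rayleigh quotient x^t A x (for unit x) estimates lambda_1."""
    n = len(A)
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        nrm = math.sqrt(sum(t * t for t in y))
        x = [t / nrm for t in y]
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    lam = sum(xi * yi for xi, yi in zip(x, Ax))  # x has unit length here
    return lam, x

# The eigenvalues of [[2,1],[1,2]] are 3 and 1, so the iteration gives 3.
lam, v = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```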
Normal Equations
If b is not in the column space of A, then Ax = b has no solution; the system is inconsistent. This is typical if A is m × n with m > n, which we will assume here. Let us also assume that A has linearly independent columns.
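Under those assumptions the least squares solution can be obtained from the normal equations A^t A x = A^t b. A Python sketch for the 2-column case (Cramer's rule on the 2×2 system is for illustration only; in practice a QR factorization is preferred for accuracy):

```python
def normal_equations_2x2(A, b):
    """Form G = A^t A and rhs = A^t b for an m-by-2 A, then solve the
    2x2 system G x = rhs by Cramer's rule."""
    m = len(A)
    G = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)]
         for i in range(2)]
    rhs = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    x0 = (rhs[0] * G[1][1] - G[0][1] * rhs[1]) / det
    x1 = (G[0][0] * rhs[1] - rhs[0] * G[1][0]) / det
    return [x0, x1]

# Fit c + m*t to the points (0,0), (1,1), (2,1); b is not in col(A).
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [0.0, 1.0, 1.0]
x = normal_equations_2x2(A, b)   # best-fit line is 1/6 + t/2
```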
Norms of Matrices
We can measure matrix sizes using vector norms, because R^{m×n} is a vector space. Although the names are different, all of the p-norms above give matrix norms if the matrix is stretched into a vector of length mn.
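For example, stretching the matrix into a vector and taking the 2-norm gives the Frobenius norm, while the induced 1- and ∞-norms have simple column-sum and row-sum formulas. A Python sketch (function names are mine, not the notes'):

```python
import math

def frob(A):
    """Treat the matrix as one long vector and take its 2-norm."""
    return math.sqrt(sum(a * a for row in A for a in row))

def norm1(A):
    """Induced 1-norm: maximum absolute column sum."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def norminf(A):
    """Induced infinity-norm: maximum absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

A = [[1.0, -2.0], [3.0, 4.0]]
```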
Least Squares with Gram-Schmidt
Recall the Gram-Schmidt QR factorization: A = QR, where
Q ∈ R^{m×n} satisfies Q^t Q = I and R ∈ R^{n×n} is upper triangular. The cost is about 2mn^2 + O(mn) flops. If A is overwritten by Q,
Gaussian Elimination as a Matrix Factorization
Each of the elementary row operations from Gaussian Elimination (GE) has associated with it a nonsingular matrix with the property that multiplying (on the left) by that matrix performs the row operation.
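A small Python sketch of this idea on a 2×2 example (the matrices are illustrative): one elimination step is multiplication by an elementary matrix E, and inverting E (just flip the sign of the multiplier) exposes the factorization A = LU.

```python
def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# One GE step on A: subtract (a21/a11) * row 1 from row 2.
A = [[2.0, 1.0], [4.0, 5.0]]
m21 = A[1][0] / A[0][0]         # multiplier, here 2.0
E = [[1.0, 0.0], [-m21, 1.0]]   # elementary (elimination) matrix
U = matmul(E, A)                # upper triangular
L = [[1.0, 0.0], [m21, 1.0]]    # E^{-1}: flip the sign of the multiplier
LU = matmul(L, U)               # recovers A, i.e. A = LU
```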
Matrix Arithmetic
If you don't remember how to add matrices, you should look it up now. Here we are going to talk about matrix products. Let A ∈ R^{m×n} and B ∈ R^{n×p}. Let's also say that the matrix A is the coo
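The defining formula c_ij = sum_k a_ik b_kj translates directly into a triple loop; this Python sketch is for illustration only (real codes use tuned libraries):

```python
def matmul(A, B):
    """C = A*B for A (m x n) and B (n x p): c_ij = sum_k a_ik * b_kj."""
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "inner dimensions must agree"
    C = [[0.0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

C = matmul([[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]])
```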
Sensitivity of Linear Least Squares
Assume that A ∈ R^{m×n} has full column rank. We know that the problem

    min_x ||Ax − b||_2    (1)

has a unique solution, say x_LS, which satisfies the (nonsingular) normal equations
Linear Least Squares Computations
Assuming that A ∈ R^{m×n} has linearly independent columns, the problem

    arg min_x ||Ax − b||_2    (1)

has a unique solution, say x_LS, which is also the unique solution to the normal equations.
Projections
With the inner product <x, y>, we have angles (<x, y> = ||x||_2 ||y||_2 cos(θ)), and can speak of orthogonality: x ⊥ y ⟺ <x, y> = 0. Here we will consider the standard inner product for R^n: <x, y> = x^t y.
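With that inner product, the orthogonal projection of x onto the line spanned by y is p = (<x,y>/<y,y>) y, and the residual x − p is orthogonal to y. A Python sketch (names are mine):

```python
def dot(x, y):
    """Standard inner product <x, y> = x^t y."""
    return sum(a * b for a, b in zip(x, y))

def project(x, y):
    """Orthogonal projection of x onto span{y}: p = (<x,y>/<y,y>) y."""
    c = dot(x, y) / dot(y, y)
    return [c * t for t in y]

x, y = [3.0, 4.0], [1.0, 0.0]
p = project(x, y)                  # the component of x along y
r = [a - b for a, b in zip(x, p)]  # the residual, orthogonal to y
```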
Two QR Factorizations
We compare two techniques for QR factorizations of a full-rank matrix A ∈ R^{m×n}, with m ≥ n. While there are a few other methods available for use, we will talk here about the modified Gram-Schmidt and Householder approaches.
C:\matlabR12\work\rk2.m (December 10, 2010)

function w = rk2(fofty,a,b,alpha,N)
% function w = rk2(fofty,a,b,alpha,N) is a Matlab program to solve the IVP
% by using the modified Euler method.
C:\matlabR12\work\rf.m (December 10, 2010)

w = euler('fofty',0,1,1/3,5);
w = euler('fofty',0,1,1/3,20);
w = euler('fofty',0,1,1/3,80);
w = euler('fofty',0,1,1/3,320);
w = euler('f
C:\matlabR12\work\euler.m (December 10, 2010)

function w = euler(fofty,a,b,alpha,N)
% function w = euler(fofty,a,b,alpha,N) is a Matlab program to solve the IVP
% by using the Euler method.
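The listings above are MATLAB; here is a Python sketch of the same Euler scheme, with the test problem y' = y, y(0) = 1 (whose exact solution is e^t) chosen for illustration. Halving the step size roughly halves the error, as expected for a first-order method:

```python
import math

def euler(f, a, b, alpha, N):
    """Euler's method for y' = f(t, y), y(a) = alpha, N steps on [a, b]."""
    h = (b - a) / N
    t, w = a, alpha
    for _ in range(N):
        w = w + h * f(t, w)   # one Euler step
        t = t + h
    return w

# Approximate y(1) = e for y' = y, y(0) = 1, at two step sizes.
w320 = euler(lambda t, y: y, 0.0, 1.0, 1.0, 320)
w80 = euler(lambda t, y: y, 0.0, 1.0, 1.0, 80)
err = abs(w320 - math.e)
ratio = abs(w80 - math.e) / err   # 4x more steps -> about 4x smaller error
```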
Norms of Vectors
When we want to measure the length of, or distance between, vectors we need a yardstick that measures in a consistent way, generalizing the idea of absolute value to vector spaces. We
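The usual such yardsticks are the p-norms, with the max norm as the limiting case. A Python sketch (function names are mine, not from the notes):

```python
def pnorm(x, p):
    """The vector p-norm: ||x||_p = (sum_i |x_i|^p)^(1/p), p >= 1."""
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def infnorm(x):
    """The limit p -> infinity gives the max norm ||x||_inf."""
    return max(abs(t) for t in x)

x = [3.0, -4.0]   # a 3-4-5 right triangle in the 2-norm
```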
Cancellation and Swamping
The IEEE standard 754 requires that the FAFA holds. That is: any arithmetic operation on two floats returns the float nearest the true value. Here we discuss two principal ways in which computed results can still lose accuracy: cancellation and swamping.
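Both effects are easy to trigger; the examples below are my own illustrations. Subtracting nearly equal numbers (here 1 − cos x for tiny x) cancels all significant digits, while an algebraically equivalent form avoids it; and adding a small number to a huge one loses the small one entirely (swamping):

```python
import math

x = 1e-9
naive = 1.0 - math.cos(x)              # cos(x) rounds to exactly 1.0:
                                       # catastrophic cancellation gives 0
stable = 2.0 * math.sin(x / 2.0) ** 2  # same quantity algebraically,
                                       # computed without cancellation

# Swamping: the 1.0 is lost completely when added to the larger number.
swamped = (1e16 + 1.0) - 1e16
```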
The Singular Value Decomposition
Let A ∈ R^{m×n}. Then there exist orthogonal matrices U ∈ R^{m×m}, V ∈ R^{n×n}, and a diagonal matrix of singular values Σ = diag(σ_1, σ_2, . . . , σ_p), where p = min(m, n) and σ_1 ≥ σ_2 ≥ · · · ≥ σ_p ≥ 0,
QR Iterations
Consider the iteration

    Q_i R_i ← A_i,    A_{i+1} ← R_i Q_i.

Here we have first computed the QR factorization of A_i, and then reversed the product to form A_{i+1}. From A_i = Q_i R_i we have R_i = Q_i^t A_i, and so A_{i+1} = R_i Q_i = Q_i^t A_i Q_i.
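So each step is a similarity transformation and the eigenvalues are preserved. A Python sketch for a symmetric 2×2 example (the Givens-style QR and the test matrix are my own illustrative choices): the iterates converge to a diagonal matrix holding the eigenvalues.

```python
import math

def qr_2x2(A):
    """QR of a 2x2 matrix via one Givens-style rotation (a sketch)."""
    r = math.hypot(A[0][0], A[1][0])
    cs, sn = A[0][0] / r, A[1][0] / r
    Q = [[cs, -sn], [sn, cs]]
    R = [[cs * A[0][j] + sn * A[1][j] for j in range(2)],    # R = Q^t A
         [-sn * A[0][j] + cs * A[1][j] for j in range(2)]]
    return Q, R

A = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 3 and 1
for _ in range(60):
    Q, R = qr_2x2(A)
    # A_{i+1} = R_i Q_i = Q_i^t A_i Q_i, a similarity transformation.
    A = [[sum(R[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
```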
A = LDM t
If A is nonsingular and A = LU, then we can set D = diag(U) and (since D is nonsingular) M^t = D^{-1} U is a unit upper triangular matrix and A = LDM^t. There is no inherent benefit to this factorization
The Inverse Power Method
Assume that A ∈ C^{n×n} has n linearly independent eigenvectors v_1, v_2, . . . , v_n, and associated eigenvalues λ_1, λ_2, . . . , λ_n, with |λ_1| > |λ_2| ≥ |λ_i|, i = 3, 4, . . . , n.
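The inverse power method is the power method applied to A^{-1}: each step solves A y = x rather than forming the inverse, and the iteration converges toward the eigenvalue of A of smallest magnitude. A Python sketch on a 2×2 example (Cramer's rule for the solve is for illustration only; names and the test matrix are mine):

```python
import math

def solve2(A, b):
    """Solve a 2x2 system by Cramer's rule (illustration only)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def inverse_power(A, x, iters=50):
    """Power iteration on A^{-1}: solve A y = x, normalize, repeat."""
    for _ in range(iters):
        y = solve2(A, x)
        nrm = math.sqrt(y[0] ** 2 + y[1] ** 2)
        x = [t / nrm for t in y]
    Ax = [A[0][0] * x[0] + A[0][1] * x[1],
          A[1][0] * x[0] + A[1][1] * x[1]]
    return x[0] * Ax[0] + x[1] * Ax[1]   # Rayleigh quotient of A at x

# Eigenvalues of [[2,1],[1,2]] are 3 and 1; the iteration finds 1.
lam = inverse_power([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```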
The Action of Householder Reflectors
Let A ∈ R^{m×n}, m ≥ n. You may recall that the Householder QR factorization can be written as H_p · · · H_2 H_1 A = R, where p = min(n, m − 1) and H_k = I − (2 / u_k^t u_k) u_k u_k^t is a Householder reflector.
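The key point is that H is never formed explicitly: H x = x − (2 u^t x / u^t u) u costs only O(n). A Python sketch, using the standard choice u = x + sign(x_1)||x||_2 e_1 that reflects x onto the e_1 axis (names are mine):

```python
import math

def apply_householder(u, x):
    """Apply H = I - (2 / u^t u) u u^t to x without forming H:
    H x = x - (2 u^t x / u^t u) u."""
    c = 2.0 * sum(a * b for a, b in zip(u, x)) / sum(a * a for a in u)
    return [xi - c * ui for xi, ui in zip(x, u)]

# Reflect x = [3, 4] onto the e_1 axis: H x = [-5, 0].
x = [3.0, 4.0]
nrm = math.hypot(*x)           # ||x||_2 = 5
u = [x[0] + nrm, x[1]]         # u = x + sign(x_1) ||x||_2 e_1
Hx = apply_householder(u, x)
```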
Sensitivity of Simple Eigenvalues
Let A ∈ C^{n×n}. We would like to know how small perturbations in A change its eigenvalues. Of course ∂λ_k/∂a_{ij} measures just this sensitivity, but it isn't practical to compute
Conditioning and Stability
A problem is well conditioned if a small change in the input creates a small change in the output (solution). A computation is backward stable if it produces the exact solution of a slightly perturbed problem.
Condition Numbers
A problem is well conditioned if a small change in the input always creates a small change in the output (solution). A problem is ill-conditioned if a small change in the input can create a large change in the output.
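For a differentiable scalar problem f(x), one standard way to quantify this is the relative condition number κ(x) = |x f'(x) / f(x)|. A Python sketch (the example f(x) = x − 1, which is ill-conditioned near x = 1, is my own illustration):

```python
def rel_condition(f, fprime, x):
    """Relative condition number kappa(x) = |x * f'(x) / f(x)|."""
    return abs(x * fprime(x) / f(x))

f = lambda x: x - 1.0
fp = lambda x: 1.0

kappa_far = rel_condition(f, fp, 2.0)         # 2: well conditioned
kappa_near = rel_condition(f, fp, 1.000001)   # ~1e6: ill-conditioned
```

The moral: subtraction of nearly equal quantities is an ill-conditioned problem, independent of how it is computed.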
Comparing
How do we compare two real numbers, say a and b?

    a − b or b − a          difference
    |a − b|                 absolute (or symmetric) difference
    |a − b| / |a|           relative (to a) difference
    |a − b| / |b|           relative (to b) difference
    100 |a − b| / |a|       relative (to a) percent difference
    log10(|a − b| / |a|)    …
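The measures in this table can be sketched directly in Python (the dictionary keys are my own labels):

```python
def diffs(a, b):
    """Several ways to compare two nonzero real numbers a and b."""
    return {
        "difference": a - b,
        "absolute": abs(a - b),
        "relative_to_a": abs(a - b) / abs(a),
        "relative_to_b": abs(a - b) / abs(b),
        "percent_of_a": 100.0 * abs(a - b) / abs(a),
    }

d = diffs(100.0, 98.0)
```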
The Cholesky Factorization
Symmetric matrices are important because they are common in applications, have some very nice properties, and because the symmetry can be exploited by algorithms to save time and storage.
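For a symmetric positive definite A, the Cholesky factorization A = G G^t (G lower triangular) does exactly that, needing no pivoting and roughly half the work of LU. A Python sketch of the standard column-by-column formulas (names and the 2×2 test matrix are mine):

```python
import math

def cholesky(A):
    """Return lower triangular G with A = G G^t, for symmetric
    positive definite A."""
    n = len(A)
    G = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: what is left of a_jj after earlier columns.
        s = A[j][j] - sum(G[j][k] ** 2 for k in range(j))
        G[j][j] = math.sqrt(s)
        for i in range(j + 1, n):
            G[i][j] = (A[i][j] - sum(G[i][k] * G[j][k]
                                     for k in range(j))) / G[j][j]
    return G

A = [[4.0, 2.0], [2.0, 5.0]]
G = cholesky(A)
```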
Classical Gram-Schmidt vs Modified Gram-Schmidt
Let A ∈ R^{m×n}, with m ≥ n, and let A have n linearly independent columns a_1, a_2, . . . , a_n. There are many ways to implement the Gram-Schmidt process. Here are
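One of the variants, modified Gram-Schmidt, as a Python sketch (the classical version differs only in computing every coefficient r_ij from the original column a_j instead of from the updated working vector, which is what makes it less stable in floating point):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def mgs(cols):
    """Modified Gram-Schmidt on a list of columns: subtract each
    projection as soon as its q is available, updating v in place."""
    n = len(cols)
    Q = []
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = list(cols[j])
        for i in range(len(Q)):
            R[i][j] = dot(Q[i], v)   # note: uses the *updated* v
            v = [vk - R[i][j] * qk for vk, qk in zip(v, Q[i])]
        R[j][j] = math.sqrt(dot(v, v))
        Q.append([vk / R[j][j] for vk in v])
    return Q, R

a1, a2 = [1.0, 1.0, 0.0], [1.0, 0.0, 1.0]
Q, R = mgs([a1, a2])   # Q orthonormal, R upper triangular
```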
Solving Square Triangular Systems
When you were taught Gaussian Elimination, you were probably taught to stop the elimination process when you achieved an upper triangular form, at which point you could finish solving the system by back substitution.
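Back substitution works from the last row up: x_i = (b_i − sum_{j>i} u_ij x_j) / u_ii. A Python sketch on a small example (names are mine):

```python
def back_substitute(U, b):
    """Solve U x = b for upper triangular, nonsingular U, working
    from the last row up."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

U = [[2.0, 1.0], [0.0, 3.0]]
x = back_substitute(U, [4.0, 6.0])
```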
Floating Point Numbers
Most numbers cannot be represented in a computer. We are forced to use approximations which can be represented on the computer. Let the floating point approximation of x be called fl(x).
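A few consequences are easy to see directly in Python, whose floats are IEEE doubles (the examples are my own illustrations): the spacing of floats near 1.0 is the machine epsilon, fl(0.1) is not 0.1, and sufficiently small increments to 1.0 round away entirely.

```python
import sys

# The spacing of doubles near 1.0 is the machine epsilon, 2^-52.
eps = sys.float_info.epsilon

# fl(x) is generally not x: 0.1 has no finite binary expansion,
# so fl(0.1) is merely the nearest double, and errors accumulate.
exact = (0.1 + 0.1 + 0.1) == 0.3        # False

# Anything smaller than eps/2 added to 1.0 rounds back to 1.0.
lost = (1.0 + eps / 4.0) == 1.0         # True
```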