Inner products and Norms
Inner product of 2 vectors
(Read sec. 2.2)
Inner product of two vectors x and y in R^n:
(x, y) = x1 y1 + x2 y2 + · · · + xn yn
Notation: (x, y) or y^T x
For complex vectors
(x, y) = x1 ȳ1 + x2 ȳ2 + · · · + xn ȳn for x, y in C^n
Note: (x, y) = y^H x
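These definitions can be checked with a small NumPy sketch; the vectors below are arbitrary examples:

```python
import numpy as np

# Real case: (x, y) = y^T x
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
real_ip = y @ x                    # 1*4 + 2*5 + 3*6 = 32

# Complex case: (x, y) = y^H x -- conjugate the second argument
u = np.array([1 + 1j, 2 - 1j])
v = np.array([3 + 0j, 1 + 2j])
complex_ip = np.conj(v) @ u        # same as np.vdot(v, u)
```

Note that np.vdot conjugates its first argument, matching the y^H x convention.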
SPARSITY, ITERATIVE METHODS,
AND APPLICATIONS
Brief overview of sparsity
Basic iterative schemes
Reordering techniques
Applications
Typical Problem:
Physical Problem
Nonlinear PDEs
Discretization
Linearization (Newton)
Sequence of Sparse Linear Systems
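As a minimal illustration of the "basic iterative schemes" listed above, here is a Jacobi iteration sketch on the tridiagonal matrix tridiag(-1, 2, -1) that arises from a 1-D Poisson discretization; the size n = 10 and the tolerances are arbitrary choices:

```python
import numpy as np

# Jacobi iteration sketch on the 1-D Poisson matrix tridiag(-1, 2, -1).
# With the splitting A = D + (A - D): x_{k+1} = D^{-1} (b - (A - D) x_k).
n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

x = np.zeros(n)
d = np.diag(A)                            # diagonal part of A
for _ in range(5000):
    x_new = (b - (A @ x - d * x)) / d
    if np.linalg.norm(x_new - x) < 1e-8:  # stop when the update stalls
        x = x_new
        break
    x = x_new

residual = np.linalg.norm(A @ x - b)
```

In practice the matrix would be stored in a sparse format; the dense form is used here only to keep the sketch short.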
EIGENVALUE PROBLEMS
Background on eigenvalues/ eigenvectors / decompositions 7.1
Perturbation analysis, condition numbers. 7.2
Power method 7.3
The QR algorithm 7.5
Practical QR algorithms: use of Hessenberg form and shifts 7.4
The symmetric eigenvalue problem
The QR algorithm
The most common method for solving small (dense)
eigenvalue problems. The basic algorithm:
QR without shifts
1. Until Convergence Do:
2.    Compute the QR factorization A = QR
3.    Set A := RQ
4. EndDo
Until Convergence means: until A becomes close enough to an upper triangular matrix.
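A minimal NumPy sketch of the unshifted QR iteration above, on an arbitrary small symmetric example (the tolerance and iteration cap are illustrative choices):

```python
import numpy as np

# Unshifted QR iteration: A_{k+1} = R_k Q_k where A_k = Q_k R_k.
# Each step is an orthogonal similarity A_{k+1} = Q_k^T A_k Q_k, so the
# eigenvalues are preserved; for this symmetric example, A_k tends to a
# diagonal matrix holding the eigenvalues.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
Ak = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q
    if np.linalg.norm(np.tril(Ak, -1)) < 1e-10:  # below-diagonal small?
        break

approx_eigs = np.sort(np.diag(Ak))
```

Without shifts, convergence is only linear, with rate governed by ratios of consecutive eigenvalues; this is what the Hessenberg form and shifts of the practical algorithm accelerate.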
A few applications of the SVD
Many methods require approximating the original data (matrix) by a low-rank matrix before attempting to solve the original problem.
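A short NumPy sketch of such a low-rank approximation via the truncated SVD; the matrix and the rank k = 2 are arbitrary examples:

```python
import numpy as np

# Best rank-k approximation via the truncated SVD (Eckart-Young):
# keep the k largest singular triplets.
rng = np.random.default_rng(0)        # arbitrary example data
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# In the 2-norm, the approximation error equals sigma_{k+1}.
err = np.linalg.norm(A - A_k, 2)
```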
Regularization methods require the approximate solution of a least-squares linear system Ax ≈ b.
Orthogonality: the Gram-Schmidt algorithm
1. Two vectors u and v are orthogonal if (u, v ) = 0.
2. A system of vectors {v1, . . . , vn} is orthogonal if (vi, vj) = 0 for i ≠ j; and orthonormal if (vi, vj) = δij.
3. A matrix is orthogonal if its columns form an orthonormal system.
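A modified Gram-Schmidt sketch of the orthonormalization these definitions lead to; the function name mgs and the test matrix are illustrative:

```python
import numpy as np

# Modified Gram-Schmidt: orthonormalize the columns of A, producing
# Q with orthonormal columns and upper triangular R with A = Q R
# (assumes the columns of A are linearly independent).
def mgs(A):
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    for j in range(n):
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
        for k in range(j + 1, n):
            R[j, k] = Q[:, j] @ Q[:, k]    # project column k onto q_j ...
            Q[:, k] -= R[j, k] * Q[:, j]   # ... and subtract the projection
    return Q, R

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = mgs(A)
```

The "modified" variant subtracts each projection immediately, which is numerically more stable than classical Gram-Schmidt.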
THE SINGULAR VALUE DECOMPOSITION
The SVD: existence and properties.
Pseudo-inverses and the SVD
Use of SVD for least-squares problems
Applications of the SVD
Text: mainly sect. 2.4
The Singular Value Decomposition (SVD)
Theorem: For any matrix A ∈ R^{m×n} there exist orthogonal matrices U ∈ R^{m×m} and V ∈ R^{n×n} such that A = U Σ V^T, where Σ is an m × n diagonal matrix whose diagonal entries are the singular values of A.
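The factorization can be checked numerically; the small matrix below is an arbitrary example:

```python
import numpy as np

# Numerical check of the factorization A = U Sigma V^T.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
U, s, Vt = np.linalg.svd(A)            # full SVD: U is 2x2, Vt is 3x3
Sigma = np.zeros_like(A)
Sigma[:len(s), :len(s)] = np.diag(s)   # embed singular values in the 2x3 Sigma

A_rebuilt = U @ Sigma @ Vt
```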
SOLVING LINEAR SYSTEMS OF EQUATIONS
See Chapter 3 of text
Background on linear systems
Gaussian elimination and the Gauss-Jordan algorithms
The LU factorization
Gaussian Elimination with pivoting
Background: Linear systems
The Problem: A is an n × n matrix and b is a given vector in R^n; find x such that Ax = b.
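A minimal sketch of the LU factorization via Gaussian elimination without pivoting, assuming nonzero pivots; the example matrix is arbitrary:

```python
import numpy as np

# Doolittle LU factorization sketch (no pivoting; assumes no zero pivot
# is encountered, i.e., all leading principal minors are nonzero).
def lu_nopivot(A):
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]   # zero out entry below the pivot
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_nopivot(A)
```

With L and U in hand, Ax = b is solved by one forward and one back substitution; pivoting, covered below, is what makes the process reliable in general.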
CSCI 5304
Fall 2013
COMPUTATIONAL ASPECTS OF MATRIX THEORY
Class time : MW 9:45-11:00am
Room: KHKH 3-111
Instructor: Yousef Saad
Class Web-site: www-users.cselabs.umn.edu/classes/Fall-2013/csci5304/
September 3, 2013
Let us begin.
Lecture notes will be posted on the class web-site.
ERROR AND SENSITIVITY ANALYSIS FOR SYSTEMS
OF LINEAR EQUATIONS
Read parts of sections 2.6 and 3.5.3
Conditioning of linear systems.
Estimating errors for solutions of linear systems
Backward error analysis
Relative element-wise error analysis
Perturbation analysis
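A quick NumPy illustration of conditioning: perturb b in an ill-conditioned system and compare the relative change in the solution with the bound κ(A)·‖δb‖/‖b‖. The 8 × 8 Hilbert matrix is an arbitrary but classically ill-conditioned choice:

```python
import numpy as np

# Perturb the right-hand side of an ill-conditioned system and compare
# the relative change in x with the bound kappa(A) * ||db|| / ||b||.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert
x_true = np.ones(n)
b = A @ x_true

db = 1e-10 * np.random.default_rng(1).standard_normal(n)  # tiny perturbation
x_pert = np.linalg.solve(A, b + db)

kappa = np.linalg.cond(A)
rel_change = np.linalg.norm(x_pert - x_true) / np.linalg.norm(x_true)
bound = kappa * np.linalg.norm(db) / np.linalg.norm(b)
```

Because κ(A) is around 10^10 here, a perturbation of size 10^-10 in b can change the solution in its leading digits, which is exactly what the bound predicts.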
SPECIAL LINEAR SYSTEMS OF EQUATIONS
Symmetric positive definite matrices.
The LDL^T decomposition; the Cholesky factorization
Banded systems
Positive-Definite Matrices
A real matrix is said to be positive definite if
(Au, u) > 0 for all u ≠ 0, u ∈ R^n
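A practical sketch of testing positive definiteness via the Cholesky factorization, which exists precisely for symmetric positive definite matrices; the function name is_spd and the example matrices are illustrative:

```python
import numpy as np

# SPD test via Cholesky: A = G G^T with G lower triangular exists
# if and only if A is symmetric positive definite.
def is_spd(A, tol=1e-12):
    if not np.allclose(A, A.T, atol=tol):
        return False                  # not symmetric
    try:
        np.linalg.cholesky(A)         # raises LinAlgError if not PD
        return True
    except np.linalg.LinAlgError:
        return False

spd = np.array([[4.0, 1.0], [1.0, 3.0]])
not_spd = np.array([[1.0, 2.0], [2.0, 1.0]])  # eigenvalues 3 and -1
```

Attempting the Cholesky factorization is cheaper than computing all eigenvalues, which is why it is the standard numerical SPD test.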
FLOATING POINT ARITHMETIC - ERROR ANALYSIS
Brief review of floating point arithmetic
Model of floating point arithmetic
Notation, backward and forward errors
Read: Section 2.7 of text.
Roundoff errors and floating-point arithmetic
The basic problem: the set of numbers representable on a machine is finite.
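Two standard illustrations of roundoff in IEEE double precision:

```python
import numpy as np

# Roundoff in double precision: decimal fractions like 0.1 are not
# exactly representable in binary, and machine epsilon bounds the
# relative rounding error.
a = 0.1 + 0.2
exact_mismatch = (a != 0.3)        # True: neither 0.1, 0.2, nor 0.3 is exact

eps = np.finfo(np.float64).eps     # about 2.22e-16
lost = (1.0 + eps / 2 == 1.0)      # True: eps/2 is below the last bit of 1.0
```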
LARGE SPARSE EIGENVALUE PROBLEMS
Projection methods
The subspace iteration
Krylov subspace methods: Arnoldi and Lanczos
Golub-Kahan-Lanczos bidiagonalization
General Tools for Solving Large Eigen-Problems
Projection techniques: Arnoldi, Lanczos, subspace iteration