Lecture # 9
The Algebraic Eigenvalue Problem: A brief introduction
Consider
$A \in \mathbb{R}^{n \times n}$ or $\mathbb{C}^{n \times n}$.
Here we really need complex numbers!
In general, we have
$$Ax = \lambda x,$$
$\lambda$ an eigenvalue, $x$ an eigenvector.
Eigenvalues and eigenvectors have many applications! In physics, these a
Lecture # 2
Linear Algebra Overview and Review
Complex Numbers. Necessary to find roots of real polynomials:
$$i^2 = -1.$$
Conjugate:
$$z = x + iy, \qquad \bar{z} = x - iy.$$
Modulus:
$$|z| = (z\bar{z})^{1/2} = \sqrt{x^2 + y^2}.$$
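These definitions can be checked directly with Python's built-in complex type (a minimal sketch; the sample number is mine, not from the lecture):

```python
# Conjugate and modulus of a complex number, using Python's built-in complex type.
z = 3 + 4j

z_bar = z.conjugate()               # z-bar = x - iy
modulus = (z * z_bar).real ** 0.5   # |z| = (z z-bar)^(1/2)

print(z_bar)    # (3-4j)
print(modulus)  # 5.0
```

Note that $z\bar{z}$ is always real and nonnegative, which is why the square root is well defined.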
Complex numbers will be important to us in the study of eigenvalues,
Four
Lecture # 18
Iterative Methods for Large Linear Systems: Part II
We wish to solve
Ax = b
where $A \in \mathbb{R}^{n \times n}$ is nonsingular. We think of $n$ as being VERY LARGE, say,
$n = 10^6$ or $n = 10^7$. Usually, the matrix is also sparse (mostly zeros) and
Gaussian elimination is
Computer Science/Mathematics 456
Homework One
Due 20 January 2015
1. Consider the following problem from p.285 of your text.
$$\min_y \|y\|_1 \qquad (1)$$
$$\text{s.t. } Jy = b \qquad (2)$$
where
$$J = \begin{pmatrix} 1 & 2 & 1 & 0 & 0 \\ 0 & 1 & 2 & 1 & 0 \\ 0 & 0 & 1 & 2 & 1 \end{pmatrix}, \qquad b = \begin{pmatrix} 1 \\ 8 \\ 2 \end{pmatrix}.$$
Set up the linear program to solve (1)-(2).
The ve
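One standard way to pose (1)-(2) as a linear program is to split $y = u - v$ with $u, v \ge 0$, so that $\|y\|_1 = \sum_i (u_i + v_i)$ at the optimum. A sketch using SciPy's `linprog` (the variable names are mine, and this assumes the reconstruction of $J$ and $b$ above):

```python
import numpy as np
from scipy.optimize import linprog

# min ||y||_1  s.t.  J y = b, via the split y = u - v with u, v >= 0:
#   min  sum(u) + sum(v)   s.t.  J u - J v = b
J = np.array([[1., 2., 1., 0., 0.],
              [0., 1., 2., 1., 0.],
              [0., 0., 1., 2., 1.]])
b = np.array([1., 8., 2.])

n = J.shape[1]
c = np.ones(2 * n)                   # objective: sum of all u and v entries
A_eq = np.hstack([J, -J])            # J u - J v = b
res = linprog(c, A_eq=A_eq, b_eq=b)  # default bounds are already x >= 0

y = res.x[:n] - res.x[n:]            # recover y = u - v
```

At the optimum at most one of $u_i, v_i$ is nonzero for each $i$, so the objective really is $\|y\|_1$.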
Computer Science/Mathematics 456
Homework Two
Due 3 February 2017
1. Consider the Householder transformation matrix
$$H = I - 2ww^T, \qquad \|w\|_2 = 1.$$
(a) Give the eigenvalues of this matrix.
(b) Show that its eigenvector matrix is a Householder transformation
Q such t
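A quick numerical check of part (a): $H = I - 2ww^T$ is symmetric, and its spectrum is one eigenvalue $-1$ (with eigenvector $w$) and $+1$ with multiplicity $n-1$. A sketch with a random $w$ of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
w = rng.standard_normal(n)
w /= np.linalg.norm(w)               # enforce ||w||_2 = 1

H = np.eye(n) - 2 * np.outer(w, w)   # Householder transformation

evals = np.sort(np.linalg.eigvalsh(H))  # H is symmetric, so eigvalsh applies
# expected spectrum: -1 once (direction w), +1 with multiplicity n - 1
```

Geometrically, $H$ reflects across the hyperplane orthogonal to $w$: it flips $w$ and leaves that hyperplane fixed.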
Computer Science/Mathematics 456
Homework Three
Due 17 February 2017
1. Consider the $n \times n$ tridiagonal matrix
$$T = \begin{pmatrix}
a & b &        &        & 0 \\
b & a & b      &        &   \\
  & b & \ddots & \ddots &   \\
  &   & \ddots & a      & b \\
0 &   &        & b      & a
\end{pmatrix}$$
Show that the vector $v_j = (v_{1j}, v_{2j}$
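The statement is cut off here, but for the tridiagonal Toeplitz matrix above the classical result is that $v_j$ with entries $v_{kj} = \sin\!\big(kj\pi/(n+1)\big)$ is an eigenvector with eigenvalue $\lambda_j = a + 2b\cos\!\big(j\pi/(n+1)\big)$. A numerical check under that assumption (the sample values $a = 2$, $b = -1$ are mine):

```python
import numpy as np

n, a, b = 6, 2.0, -1.0
# tridiagonal Toeplitz matrix: a on the diagonal, b on both off-diagonals
T = a * np.eye(n) + b * (np.eye(n, k=1) + np.eye(n, k=-1))

k = np.arange(1, n + 1)
for j in range(1, n + 1):
    v = np.sin(k * j * np.pi / (n + 1))            # candidate eigenvector v_j
    lam = a + 2 * b * np.cos(j * np.pi / (n + 1))  # candidate eigenvalue
    assert np.allclose(T @ v, lam * v)
```

The key identity is $\sin((k-1)\theta) + \sin((k+1)\theta) = 2\sin(k\theta)\cos\theta$, with the boundary terms vanishing because $\sin 0 = \sin(j\pi) = 0$.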
Lecture # 1
Introduction
Numerical analysis is the science of computing the solutions of problems
that are posed mathematically in the field of real or complex numbers.
Today I will just give a couple of examples.
Example 1 Computation of $\pi \approx 3.14159265358979$
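The fragment does not say which method the lecture uses; as one illustration of computing $\pi$ numerically, here is Machin's formula evaluated with a truncated Taylor series for $\arctan$ (a sketch of mine):

```python
import math

def arctan_series(x, terms=25):
    # Taylor series arctan(x) = sum_k (-1)^k x^(2k+1) / (2k+1), valid for |x| < 1
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# Machin's formula: pi = 16 arctan(1/5) - 4 arctan(1/239)
pi_approx = 16 * arctan_series(1 / 5) - 4 * arctan_series(1 / 239)
```

Because $|x| \le 1/5$, the series terms shrink geometrically and 25 terms already give far more than double precision demands.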
Lecture # 20
The Preconditioned Conjugate Gradient Method
We wish to solve
Ax = b
(1)
where $A \in \mathbb{R}^{n \times n}$ is symmetric and positive definite (SPD). We think of $n$ as
being VERY LARGE, say, $n = 10^6$ or $n = 10^7$. Usually, the matrix is also
sparse (mostly zeros) and
Lecture # 17
Iterative Methods for Large Linear Systems
The Model Problem Iterative methods were originally designed with
partial differential equations in mind. However, they come up in many other
contexts. The most notable are Markov modeling and large
Lecture # 6
The Gram-Schmidt Algorithms
Let $X \in \mathbb{R}^{m \times n}$, $m \ge n$, be such that $\mathrm{rank}(X) = n$. That is,
$$Xy = 0 \quad \text{iff} \quad y = 0.$$
The problem is to find $y_{LS}$ such that
$$\|b - Xy_{LS}\|_2^2 = \min_{y \in \mathbb{R}^n} \|b - Xy\|_2^2 \qquad (1)$$
We also want
$$r_{LS} = b - Xy_{LS}.$$
Our approach: compute the QR decomposition, th
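As a preview of where this is headed, here is a minimal modified Gram-Schmidt QR factorization (my own sketch, not the lecture's code): it produces $X = QR$ with orthonormal columns in $Q$, after which the least squares problem reduces to a triangular solve.

```python
import numpy as np

def mgs_qr(X):
    """Modified Gram-Schmidt: X (m x n, full column rank) -> Q (m x n), R (n x n)."""
    m, n = X.shape
    Q = X.astype(float).copy()
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]                  # normalize column k
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ Q[:, j]     # project out the q_k component
            Q[:, j] -= R[k, j] * Q[:, k]
    return Q, R

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 4))
Q, R = mgs_qr(X)
```

Given $X = QR$, the least squares solution satisfies $Ry_{LS} = Q^T b$. The "modified" ordering subtracts each projection immediately, which is markedly more stable in floating point than the classical variant.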
Lecture # 4
Matrix Norms, Orthogonality, and Least Squares
More on Matrix Norms
In MATLAB, for matrices and vectors:
$$\|X\|_2 = \texttt{norm(X)}, \quad \|X\|_1 = \texttt{norm(X, 1)}, \quad \|X\|_\infty = \texttt{norm(X, inf)}, \quad \|X\|_F = \texttt{norm(X, 'fro')}.$$
These will be our four favorite norms.
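For readers working in Python rather than MATLAB, the NumPy analogues are below (a sketch of mine; note that `np.linalg.norm` interprets its `ord` argument differently for vectors than for matrices):

```python
import numpy as np

X = np.array([[1., -2.],
              [3.,  4.]])

norm2 = np.linalg.norm(X, 2)         # largest singular value
norm1 = np.linalg.norm(X, 1)         # max absolute column sum
norm_inf = np.linalg.norm(X, np.inf) # max absolute row sum
norm_fro = np.linalg.norm(X, 'fro')  # sqrt of the sum of squared entries
```

For this $X$: column sums give $\|X\|_1 = 6$, row sums give $\|X\|_\infty = 7$, and $\|X\|_F = \sqrt{30}$.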
Some inequalities for matrix n
Nonlinear Least Squares
To our notes on Newton's method for minimization, we add that quadratic
convergence is expected. That is, if $x^*$ is a critical point (not necessarily a
minimum), and $x_k$ is the $k$th iterate, then
$$\lim_{k \to \infty} \frac{\|x_{k+1} - x^*\|_2}{\|x_k - x^*\|_2^2} = C$$
for some
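A small illustration of that quadratic rate (my own example, not from the notes): pure Newton iteration on the Rosenbrock function with its exact gradient and Hessian, starting near the minimizer $x^* = (1, 1)$.

```python
import numpy as np

def grad(x):
    x1, x2 = x
    return np.array([-400 * x1 * (x2 - x1 ** 2) - 2 * (1 - x1),
                     200 * (x2 - x1 ** 2)])

def hess(x):
    x1, x2 = x
    return np.array([[1200 * x1 ** 2 - 400 * x2 + 2, -400 * x1],
                     [-400 * x1,                      200.0]])

# Newton's method for minimizing the Rosenbrock function; minimizer is (1, 1)
x = np.array([1.2, 1.2])
for _ in range(10):
    x = x - np.linalg.solve(hess(x), grad(x))
```

Near a critical point with nonsingular Hessian, the error roughly squares at every step, so a handful of iterations already reaches machine precision.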
Lecture # 3
Orthogonal Matrices and Matrix Norms
We repeat the definitions of an orthogonal set and an orthonormal set.

Definition 1 A set of $k$ vectors $\{u_1, u_2, \ldots, u_k\}$, where each $u_i \in \mathbb{R}^n$, is
said to be orthogonal with respect to the inner product $(\cdot,$
Unconstrained Optimization
More or less, these notes are just a summary of Section 9.2 of Ascher and
Greif.
We consider the minimization problem
$$\min_x \phi(x) \qquad (1)$$
where $\phi : \mathbb{R}^n \to \mathbb{R}$.

Example 1 Let $x = (x_1, x_2)^T$ and let
$$\phi(x) = (x_1 - 3)^2 + (x_2 - 4)^4 + 15.$$
This function has
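We can poke at $\phi$ numerically (a pure Python sketch; the sample points are mine). Its gradient is $\big(2(x_1 - 3),\, 4(x_2 - 4)^3\big)$, which vanishes exactly at $(3, 4)$, where $\phi = 15$.

```python
def phi(x1, x2):
    return (x1 - 3) ** 2 + (x2 - 4) ** 4 + 15

# gradient (2(x1 - 3), 4(x2 - 4)^3) is zero exactly at (3, 4)
val = phi(3, 4)          # 15, since both shifted terms vanish
nearby = phi(3.1, 4.1)   # strictly larger than 15
```

Both terms of $\phi - 15$ are nonnegative, so $(3, 4)$ is in fact a global minimizer.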
Lecture # 5
The Linear Least Squares Problem
Let $X \in \mathbb{R}^{m \times n}$, $m \ge n$, be such that $\mathrm{rank}(X) = n$. That is,
$$Xy = 0 \quad \text{iff} \quad y = 0.$$
The problem is to find $y_{LS}$ such that
$$\|b - Xy_{LS}\|_2^2 = \min_{y \in \mathbb{R}^n} \|b - Xy\|_2^2 \qquad (1)$$
We also want
$$r_{LS} = b - Xy_{LS}.$$
Our approach: compute the QR decomposition
CSE/Math 455
Lecture # 9
We are still looking at solving
Ax = b.
Our algorithm is Gaussian elimination with partial pivoting. It factors A
into
$$A = PLU$$
$P$, permutation matrix
$L$, lower triangular matrix
$U$, upper triangular matrix
$P^T$ represented as $p =$
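SciPy exposes exactly this factorization; a quick sketch with a random $A$ of my own choosing:

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

# Gaussian elimination with partial pivoting: A = P @ L @ U
P, L, U = lu(A)
```

Here `P` is returned as an explicit permutation matrix, `L` is unit lower triangular, and `U` is upper triangular; partial pivoting keeps the multipliers in `L` bounded by 1 in magnitude.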
Lecture # 19
The Conjugate Gradient Method
We wish to solve
Ax = b
(1)
where $A \in \mathbb{R}^{n \times n}$ is symmetric and positive definite (SPD). We think of $n$ as
being VERY LARGE, say, $n = 10^6$ or $n = 10^7$. Usually, the matrix is also
sparse (mostly zeros) and Cholesky factor
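A bare-bones conjugate gradient iteration, as a sketch of what this lecture builds toward (my own code: dense, unpreconditioned, and on a small SPD test matrix of my choosing):

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=1000):
    """Conjugate gradient for SPD A; returns an approximate solution of Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate update of the direction
        rs = rs_new
    return x

# small SPD test problem: the 1-D discrete Laplacian
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
```

Note that the only way $A$ enters is through matrix-vector products, which is exactly why CG suits large sparse systems.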
Lecture # 20a
The Conjugate Gradient Method for Linear Least Squares
Now let us go back to the solution of
$$y_{LS} = \arg\min_{y \in \mathbb{R}^n} \|b - Xy\|_2 \qquad (1)$$
We note that
$$\frac{1}{2}\|b - Xy\|_2^2 = \frac{1}{2} y^T X^T X y - y^T c + \frac{1}{2} b^T b$$
where $c = X^T b$. Since the last term is just a constant, we h
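A quick numerical sanity check of that identity (random $X$, $b$, $y$ of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((7, 3))
b = rng.standard_normal(7)
y = rng.standard_normal(3)
c = X.T @ b

# (1/2)||b - Xy||^2 expanded as a quadratic in y
lhs = 0.5 * np.linalg.norm(b - X @ y) ** 2
rhs = 0.5 * y @ (X.T @ X) @ y - y @ c + 0.5 * b @ b
```

The expansion is just $(b - Xy)^T(b - Xy)$ multiplied out, which is what connects the least squares problem to an SPD quadratic that CG can minimize.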
"!$#%&' ()*+&
,.-/01/%2/%3547698:4;354<5=?>[email protected][email protected]/%DLKNMPOQR>%42JS/PTI<547<5/U;V4T
KaQ
WYX[Z]\ +K ^&Q`_ bSa c&d&ePcgf$chd
_ j ilk XnmVo
Kb Q
dic
_ j ilk Xnmqp
KNrsQ
dic
m F k XmoEp cfw_
t -/U3C/ e =?T.TvuDFDE/U<C3C=?>P472;@S