X0 = (X0, Y0, Z0): 3-D coordinates of a point at time t0
T = (T1, T2, T3): translation vector, expressed in spherical coordinates by its slant σ and tilt τ:
T = (sin σ cos τ, sin σ sin τ, cos σ)   (1)
q = (q0, q1, q2, q3): quaternion representing the rotation,
q = (sin(θ/2) n, cos(θ/2)),
where n is the unit rotation axis and θ the rotation angle; q determines the rotation matrix Rot
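As a quick check of the quaternion convention above, here is a minimal sketch (function names and the test vector are my own, not from the notes): build q = (sin(θ/2) n, cos(θ/2)) for a 90° rotation about the z-axis and rotate a vector with the standard unit-quaternion rotation identity.

```python
import numpy as np

def axis_angle_quaternion(n, theta):
    """Unit quaternion q = (sin(theta/2) n, cos(theta/2)) for a rotation
    of angle theta about the unit axis n (vector part first, as in the notes)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return np.concatenate([np.sin(theta / 2) * n, [np.cos(theta / 2)]])

def quaternion_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (q1, q2, q3, q0)."""
    u, w = q[:3], q[3]  # vector and scalar parts
    # Identity for unit quaternions: v' = v + 2w (u x v) + 2 u x (u x v)
    return v + 2 * w * np.cross(u, v) + 2 * np.cross(u, np.cross(u, v))

q = axis_angle_quaternion([0, 0, 1], np.pi / 2)   # 90-degree turn about z
v = np.array([1.0, 0.0, 0.0])
print(np.round(quaternion_rotate(q, v), 6))       # ~ [0, 1, 0]
```

Note that q has unit norm by construction, which is what makes the rotation identity valid.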
Lecture-17
Theorems 5.8 and 5.9 & Levenberg-Marquardt
Convergence of Algorithms with Restarts
We can use Theorem 5.7 to prove global convergence for algorithms that are periodically restarted by setting βk = 0.
If restarts occur at k1, k2, …
Since at the restart iterations the search direction is the steepest-descent direction,
Lecture-16
Lemma 5.6 & Theorem 5.7
Lemma 5.6
Suppose that Algorithm 5.4 is implemented with a step length that satisfies the strong Wolfe conditions with 0 < c2 < 1/2. Then the method generates descent directions pk that satisfy the following inequalities:
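Lemma 5.6 hinges on step lengths satisfying the strong Wolfe conditions with c2 < 1/2. The following sketch (function name and the toy quadratic are my own) simply checks the two conditions for a given candidate step:

```python
import numpy as np

def satisfies_strong_wolfe(f, grad, x, p, alpha, c1=1e-4, c2=0.4):
    """Check the strong Wolfe conditions for step length alpha along p:
      (i)  f(x + a p) <= f(x) + c1 a grad(x)'p   (sufficient decrease)
      (ii) |grad(x + a p)'p| <= c2 |grad(x)'p|   (curvature)
    Here c2 = 0.4 < 1/2, as Lemma 5.6 requires."""
    g0 = grad(x) @ p
    armijo = f(x + alpha * p) <= f(x) + c1 * alpha * g0
    curvature = abs(grad(x + alpha * p) @ p) <= c2 * abs(g0)
    return armijo and curvature

# Toy quadratic f(x) = 1/2 x'x with p = -grad f: the exact minimizer
# along p is at alpha = 1, which satisfies both conditions.
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x = np.array([3.0, -1.0])
p = -grad(x)
print(satisfies_strong_wolfe(f, grad, x, p, alpha=1.0))   # exact step: True
print(satisfies_strong_wolfe(f, grad, x, p, alpha=2.5))   # overshoots: False
```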
Lecture-15
Homework, Rate of Convergence of
CG, preconditioning, FR-CG, PR-CG
Homework (Due April 17)
5.1, 5.9
Proof for Theorem 5.5 (see the slides)
Theorem 5.4
If A has only r distinct eigenvalues, then the CG iteration will terminate at the solution in at most r iterations.
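Theorem 5.4 is easy to see numerically. This sketch (the matrix and tolerance are my own choices, not from the notes) runs a minimal CG loop on a diagonal A with only r = 3 distinct eigenvalues and counts iterations:

```python
import numpy as np

# A with only r = 3 distinct eigenvalues (repeated on the diagonal).
eigs = [1.0, 1.0, 4.0, 4.0, 4.0, 9.0]
A = np.diag(eigs)
b = np.ones(len(eigs))

# Minimal CG loop (Algorithm 5.2); count iterations until the residual vanishes.
x = np.zeros_like(b)
r = A @ x - b
p = -r
iters = 0
while np.linalg.norm(r) > 1e-10:
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r + alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = -r_new + beta * p
    r = r_new
    iters += 1

print(iters)  # 3: one iteration per distinct eigenvalue
```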
Lecture-14
Rate of Convergence of CG
Algorithm 5.2 (CG)
Given x0;
Set r0 ← Ax0 − b, p0 ← −r0, k ← 0;
while rk ≠ 0
    αk ← (rkᵀ rk) / (pkᵀ A pk);
    xk+1 ← xk + αk pk;
    rk+1 ← rk + αk A pk;
    βk+1 ← (rk+1ᵀ rk+1) / (rkᵀ rk);
    pk+1 ← −rk+1 + βk+1 pk;
    k ← k + 1;
end (while)
We only need to keep the values of x, p, and r for two consecutive iterations.
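Algorithm 5.2 can be transcribed almost line for line; this sketch (the random test system is my own) stores only the current x, r, and p, as the note above points out:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Linear CG (Algorithm 5.2): solve Ax = b for symmetric positive-definite A.
    Only the current x, r, p are kept from one iteration to the next."""
    x = x0.astype(float)
    r = A @ x - b          # r0 = A x0 - b
    p = -r                 # p0 = -r0
    while np.linalg.norm(r) > tol:
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # alpha_k = rk'rk / pk'Apk
        x = x + alpha * p                 # x_{k+1} = x_k + alpha_k p_k
        r_new = r + alpha * Ap            # r_{k+1} = r_k + alpha_k A p_k
        beta = (r_new @ r_new) / (r @ r)  # beta_{k+1} = r_{k+1}'r_{k+1} / rk'rk
        p = -r_new + beta * p             # p_{k+1} = -r_{k+1} + beta_{k+1} p_k
        r = r_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)               # symmetric positive definite
b = rng.standard_normal(5)
x = conjugate_gradient(A, b, np.zeros(5))
print(np.allclose(A @ x, b))              # True
```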
Lecture-13
Model-based Video Compression
JPEG Baseline Coding
Divide the image into blocks of size 8×8.
Level shift all 64 pixel values in each block by subtracting 2^(n−1) (where 2^n is the maximum number of gray levels).
Compute the 2D DCT of each block.
Quantize the DCT coefficients.
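The level-shift and 2D DCT steps can be sketched as follows. This is a hand-rolled orthonormal DCT-II (built from the standard cosine basis so the example stays self-contained; the flat test block is my own), applied to an 8×8 block of 8-bit pixels:

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix: C[k, m] = c_k cos((2m + 1) k pi / (2N))
k, m = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos((2 * m + 1) * k * np.pi / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

n_bits = 8
block = np.full((N, N), 130.0)          # a flat 8x8 block of 8-bit pixels
shifted = block - 2 ** (n_bits - 1)     # level shift by 2^(n-1) = 128
coeffs = C @ shifted @ C.T              # 2-D DCT: transform rows, then columns

# For a flat block, all energy lands in the DC coefficient (top-left).
print(np.round(coeffs[0, 0], 3))        # 16.0  (= 8 * (130 - 128))
```

Quantization would then divide `coeffs` elementwise by a quantization table and round; that step is omitted here.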
Lecture-12
Theorems 5.3 and 5.2
Algorithms 5.1, 5.2
Proof
(1) rkᵀ ri = 0, for i = 0, …, k − 1
(2) span{r0, r1, …, rk} = span{r0, Ar0, …, Aᵏ r0}
(3) span{p0, p1, …, pk} = span{r0, Ar0, …, Aᵏ r0}
(4) pkᵀ A pi = 0, for i = 0, …, k − 1
Now conjugacy (4): for i =
Lecture-11
Theorems 5.3 and 5.2
Algorithms 5.1, 5.2
Theorem 5.3
1. The directions are indeed conjugate.
2. Therefore, the algorithm terminates in n steps (from
Theorem 5.1).
3. The residuals are mutually orthogonal.
4. Each direction pk and residual rk is contained in the Krylov subspace span{r0, Ar0, …, Aᵏ r0}.
Lecture-10
Theorems 5.2 and 5.3
Algorithms 5.1, 5.2
Theorem 5.2
Let x0 be any starting point and suppose that the sequence {xk} is generated by the conjugate direction algorithm. Then
rkᵀ pi = 0, for i = 0, …, k − 1,
and xk is the minimizer of
φ(x) = ½ xᵀ A x − bᵀ x
over the set {x | x = x0 + span{p0, …, pk−1}}.
Lecture-9
Conjugate Direction Algorithm
(Solution of a Linear System or Minimization of a Quadratic Function)
Conjugate Gradient
Linear conjugate gradient: for solving linear systems Ax = b with a positive-definite matrix A.
Exact solution in n steps (Hestenes & Stiefel, 1950s).
Lecture-8
Conjugate Direction Algorithm
(Solution of a Linear System or Minimization of a Quadratic Function)
Conjugate Gradient
Linear conjugate gradient: for solving linear systems Ax = b with a positive-definite matrix A.
Hestenes & Stiefel, 1950s
Non-linear conjugate gradient
Lecture-6
Convergence and order of
convergence
Line Search Methods
xk+1 = xk + αk pk
pk = −Bk⁻¹ ∇fk
Steepest descent: Bk is the identity matrix.
Newton: Bk is the Hessian matrix.
Quasi-Newton: Bk is an approximation to the Hessian matrix.
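The framework above, with Bk the identity, is steepest descent. A minimal sketch (function name, test problem, and the simple Armijo backtracking rule for αk are my own choices, not the notes' algorithm):

```python
import numpy as np

def steepest_descent(f, grad, x0, alpha0=1.0, rho=0.5, c1=1e-4,
                     tol=1e-8, max_iter=500):
    """Line-search method with B_k = I, so p_k = -grad f(x_k),
    using Armijo backtracking to pick the step length alpha_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = -g                       # steepest-descent direction
        alpha = alpha0
        while f(x + alpha * p) > f(x) + c1 * alpha * (g @ p):
            alpha *= rho             # backtrack until sufficient decrease
        x = x + alpha * p
    return x

# Minimize a convex quadratic; the minimizer solves Qx = b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b
x_star = steepest_descent(f, grad, np.zeros(2))
print(np.round(x_star, 6))           # ~ np.linalg.solve(Q, b) = [0.2, 0.4]
```

Swapping in the Hessian (Newton) or a Hessian approximation (quasi-Newton) for Bk changes only the computation of p.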
Lecture-5
Quadratic Functions
f(x) = ½ xᵀ Q x − bᵀ x
∇f(x) = Qx − b
Q is symmetric, the Hessian of f.
If x* is the unique solution of Qx = b, then it is a stationary point of f.
If the linear system Qx = b cannot be solved, then the function has no stationary point.
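The gradient formula and the link between Qx = b and the stationary point can be verified numerically. A small sketch (the particular Q, b, and check point are my own):

```python
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])

f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b               # gradient of the quadratic

# Central finite-difference check of the gradient formula at a random point.
rng = np.random.default_rng(1)
x = rng.standard_normal(2)
h = 1e-6
fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(2)])
print(np.allclose(fd, grad(x), atol=1e-5))    # True

# The stationary point solves Qx = b, and the gradient vanishes there.
x_star = np.linalg.solve(Q, b)
print(np.allclose(grad(x_star), 0))           # True
```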
Lecture-4
Line Search Methods: Search
Directions and Step Lengths
Line Search Methods
xk+1 = xk + αk pk
pk = −Bk⁻¹ ∇fk
Steepest descent: Bk is the identity matrix.
Newton: Bk is the Hessian matrix.
Quasi-Newton: Bk is an approximation to the Hessian matrix.
Inverse Hessian
Lecture-3
Search Directions
Homework Due 1/25/01
2.1, 2.2, 2.3, 2.8, 2.13, 2.14
Rate of Convergence
Definition: Suppose {pn}, n ≥ 0, is a sequence that converges to p, and let en = pn − p. If
lim (n→∞) |pn+1 − p| / |pn − p|^α = lim (n→∞) |en+1| / |en|^α = λ,
then the sequence converges to p with order α and asymptotic error constant λ.
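The definition can be illustrated by estimating α from successive errors. This sketch (Newton's iteration for √2 and the standard log-ratio estimate are my own choices) shows the estimate landing near α = 2:

```python
import math

# Newton's method for x^2 - 2 = 0: x_{n+1} = (x_n + 2/x_n) / 2, converging to sqrt(2).
p = math.sqrt(2.0)
xs = [2.0]
for _ in range(4):
    xs.append((xs[-1] + 2.0 / xs[-1]) / 2.0)
e = [abs(x - p) for x in xs]             # errors e_n = |x_n - p|

# Order estimate from three consecutive errors:
# alpha ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1}).
alpha = math.log(e[3] / e[2]) / math.log(e[2] / e[1])
print(round(alpha, 2))                   # ~ 2.0: quadratic convergence
```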
Preliminaries
Lecture-2
Eigenvectors and Eigenvalues
The eigenvector x of a matrix A is a special vector with the following property:
Ax = λx,
where λ is called the eigenvalue.
To find the eigenvalues of a matrix A, first find the roots of:
det(A − λI) = 0.
Then solve (A − λI)x = 0 for the corresponding eigenvectors.
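The recipe above, carried out for a small symmetric matrix (the 2×2 example is my own), and cross-checked against numpy's eigensolver:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Roots of the characteristic polynomial det(A - lambda I) = 0.
# For a 2x2 matrix: lambda^2 - trace(A) lambda + det(A) = 0.
lams = np.roots([1.0, -np.trace(A), np.linalg.det(A)])
print(np.sort(lams))                     # [1. 3.]

# Verify A x = lambda x for the eigenpairs returned by numpy.
w, V = np.linalg.eig(A)
for lam, x in zip(w, V.T):
    print(np.allclose(A @ x, lam * x))   # True
```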