*The power method by itself gives no eigenvalue estimate. To estimate the eigenvalue, use the Rayleigh
Quotient:
λ ≈ (xᵀAx)/(xᵀx)
If x is an eigenvector of unit norm (which, in method 2, it is),
then λ ≈ xᵀAx

Both forms produce a true eigenvalue/eigenvector in the limit of an infinite
number of iterations, provided:
1) There is a unique dominant eigenvalue,
|λ₁| > |λ₂| ≥ … ≥ |λₙ| (the dominant eigenvalue must be real)
2) The initial guess,
x₀, must have a nonzero component in the direction of the true eigenvector
Solution: Always choose
x₀ to be
(1, 1, …, 1)ᵀ, or a random vector if that choice fails
Convergence for the power method is linear
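The iteration above can be sketched in a few lines of plain Python. The 2×2 matrix below (eigenvalues 3 and 1) and the helper names are illustrative, not from the notes:

```python
# Power method sketch with a Rayleigh quotient eigenvalue estimate.
# Example matrix A = [[2, 1], [1, 2]] is hypothetical; its dominant
# eigenvalue is 3 with eigenvector in the direction (1, 1).

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def power_method(A, x, iters=50):
    for _ in range(iters):
        y = matvec(A, x)
        c = max(y, key=abs)          # largest-magnitude component
        x = [yi / c for yi in y]     # rescale so max component is 1
    # Rayleigh quotient estimate (x is not unit norm here, so divide by x^T x)
    lam = dot(x, matvec(A, x)) / dot(x, x)
    return lam, x

lam, vec = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
print(round(lam, 6))  # → 3.0
```

Because convergence is linear with ratio |λ₂/λ₁| = 1/3 here, 50 iterations is far more than enough for this small example.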
Inverse Power Method
finds the smallest magnitude eigenvalue/eigenvector
If Ax = λx, then A⁻¹x = (1/λ)x,
therefore 1/λ
is an eigenvalue of A⁻¹
**the maximum magnitude eigenvalue of A⁻¹
is the minimum magnitude eigenvalue of A
Method:
choose x₀
calculate y_{k+1} = A⁻¹x_k (in practice, solve A y_{k+1} = x_k)
Let c_{k+1} =
the largest magnitude component of y_{k+1}
calculate x_{k+1} = y_{k+1} / c_{k+1}
Iterate...
After convergence, λ = 1/c and x is the corresponding eigenvector
*Note that the convergence is still linear
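A minimal sketch of the inverse iteration, reusing the same illustrative 2×2 matrix. In practice you would factor A once and solve A·y = x each step; here the explicit 2×2 inverse stands in for that solve:

```python
# Inverse power method sketch (plain Python, 2x2 example matrix).
# A = [[2, 1], [1, 2]] has eigenvalues 3 and 1, so the method should
# converge to the smallest-magnitude eigenvalue, 1.

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def inverse_power(A, x, iters=50):
    Ainv = inv2(A)                   # stand-in for factoring A once
    c = 1.0
    for _ in range(iters):
        y = matvec(Ainv, x)          # y = A^{-1} x (i.e. solve A y = x)
        c = max(y, key=abs)          # largest-magnitude component
        x = [yi / c for yi in y]
    return 1.0 / c, x                # c estimates 1/lambda_min

lam, vec = inverse_power([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
print(round(lam, 6))  # → 1.0
```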
The Power Method and Inverse Power Method may seem limiting, but the smallest and largest
eigenvalues/eigenvectors are the most important in most applications; we can still find
the other eigenvalues/eigenvectors by using the “spectral shift”
Shifted-Inverse Power Method (Spectral Shift)
can be used with an initial eigenvalue guess,
σ, to find the true eigenvalue of a given
matrix closest to the value σ
If Ax = λx,
then (A − σI)x = (λ − σ)x
This implies that the eigenvectors remain unchanged, but the eigenvalues are shifted by σ
Method:
choose x₀
calculate y_{k+1} = (A − σI)⁻¹x_k
Let c_{k+1} =
the largest magnitude component of y_{k+1}
calculate x_{k+1} = y_{k+1} / c_{k+1}
Iterate...
After convergence, λ = σ + 1/c and x is the corresponding eigenvector
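A sketch of the shifted iteration, again on an illustrative 2×2 matrix. The shift σ = 2.9 is a made-up guess near the eigenvalue 3:

```python
# Shifted-inverse power method sketch. With shift sigma, the iteration
# converges to the eigenvalue of A closest to sigma, recovered as
# lambda = sigma + 1/c. Matrix and shift below are hypothetical.

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def shifted_inverse_power(A, sigma, x, iters=50):
    n = len(A)
    # B = A - sigma*I; eigenvectors unchanged, eigenvalues shifted by sigma
    B = [[A[i][j] - (sigma if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    Binv = inv2(B)
    c = 1.0
    for _ in range(iters):
        y = matvec(Binv, x)          # y = (A - sigma*I)^{-1} x
        c = max(y, key=abs)
        x = [yi / c for yi in y]
    return sigma + 1.0 / c, x        # eigenvalue of A closest to sigma

lam, vec = shifted_inverse_power([[2.0, 1.0], [1.0, 2.0]], 2.9, [1.0, 0.0])
print(round(lam, 6))  # → 3.0
```

Note how quickly this converges: the shifted eigenvalues are 0.1 and −1.9, so the linear convergence ratio is about 0.05 instead of 1/3.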
In practice, the power methods are not used, except when you are solely interested in the
largest or smallest eigenvalue/eigenvector
**When you have an estimate of an eigenvalue, you can use the Shifted Inverse Power Method
to home in on the true eigenvalue/eigenvector
Rayleigh Quotient Iteration
exists because the power methods are restricted to linear convergence
*If A is symmetric, we can get cubic convergence
Method:
choose x₀
such that ||x₀|| = 1
choose σ₀
such that σ₀ = x₀ᵀAx₀
calculate y_{k+1} = (A − σ_kI)⁻¹x_k
*must recompute the inverse (or refactor) b/c σ_k
changes each
iteration
calculate x_{k+1} = y_{k+1} / ||y_{k+1}||
calculate σ_{k+1} = x_{k+1}ᵀAx_{k+1}
...Iterate until convergence
*Convergence is cubic for almost all x₀
**Method converges to just one eigenvalue/eigenvector; which one it finds depends on x₀
and σ₀
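The steps above can be sketched for a symmetric 2×2 example (the matrix and starting vector are illustrative). Since σ changes every iteration, the shifted matrix is rebuilt and re-inverted each pass; a small determinant guard stops the loop once σ is an eigenvalue to machine precision:

```python
# Rayleigh quotient iteration sketch for a symmetric 2x2 matrix.
# A = [[2, 1], [1, 2]] is hypothetical; starting near the (1, 1)
# direction, the iteration converges to the eigenvalue 3.

import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def rqi(A, x, iters=20):
    nrm = math.sqrt(dot(x, x))
    x = [xi / nrm for xi in x]               # enforce ||x0|| = 1
    sigma = dot(x, matvec(A, x))             # sigma0 = x0^T A x0
    for _ in range(iters):
        # B = A - sigma*I changes every pass, so it is re-inverted here
        (a, b), (c, d) = A
        a, d = a - sigma, d - sigma
        det = a * d - b * c
        if abs(det) < 1e-14:                 # sigma already an eigenvalue
            break                            # to machine precision
        y = [(d * x[0] - b * x[1]) / det,    # y = (A - sigma*I)^{-1} x
             (-c * x[0] + a * x[1]) / det]
        nrm = math.sqrt(dot(y, y))
        x = [yi / nrm for yi in y]           # x_{k+1} = y / ||y||
        sigma = dot(x, matvec(A, x))         # sigma_{k+1} = x^T A x
    return sigma, x

lam, vec = rqi([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.3])
print(round(lam, 6))  # → 3.0
```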
Stopping Condition for Power Methods
Let r = Zx_k − c_kx_k, then the stopping condition becomes ||r|| ≤ tol
r is defined this way because when r = 0, the true eigenvalue/eigenvector is found (we have converged)
Where Z is different depending on which method is used:
For the Power Method, Z = A
For the Inverse Power Method, Z = A⁻¹
For the Spectral Shift Method, Z = (A − σI)⁻¹
For the Quotient Method, Z = (A − σ_kI)⁻¹
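For the plain power method (Z = A), the residual test can be bolted onto the iteration like this; the example matrix and tolerance are illustrative:

```python
# Power method with the residual stopping test r = A x - c x:
# stop once ||r|| (max norm here) falls below a tolerance.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_method_with_stop(A, x, tol=1e-12, max_iters=1000):
    c = 0.0
    for _ in range(max_iters):
        y = matvec(A, x)
        c = max(y, key=abs)                          # eigenvalue estimate
        x = [yi / c for yi in y]
        r = [zi - c * xi for zi, xi in zip(matvec(A, x), x)]  # r = Zx - cx
        if max(abs(ri) for ri in r) <= tol:          # ||r|| <= tol: converged
            break
    return c, x

lam, vec = power_method_with_stop([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
print(round(lam, 6))  # → 3.0
```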
__________________________________________________________________________
Nonlinear Systems
a system is nonlinear if it does not satisfy the additivity or homogeneity properties
a nonlinear problem is compounded because a nonlinear equation can have multiple solutions
(0, 1, 2, 3, …, ∞
solutions, *you never know how many)
Examples
x² − 1 = 0 has
2 solutions
x³ − x = 0 has 3 solutions
a degree-n polynomial has up to n solutions
sin(x) = 0 has ∞ solutions (when x = kπ)
eˣ = 0 has no solutions
Two Types of Root-Finding Methods
1) Bracket
2) Open
Bracket Methods
assume we have a continuous function f(x)
If there exist points a and b
such that f(a) < 0 and f(b) > 0 (i.e., f(a)·f(b) < 0),
then there exists AT LEAST one x*
between a and b
such that f(x*) = 0
* This is only true if f(x) is continuous (Intermediate Value Theorem)
A bracket method keeps a pair of points a, b with f(a)·f(b) < 0 and shrinks the interval:
each new point replaces a or b so that the sign change, and thus a root, stays bracketed

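Bisection is the simplest bracket method: halve the interval and keep the half that preserves the sign change. A minimal sketch, with an illustrative f(x) = x² − 2 whose positive root is √2:

```python
# Bisection sketch: requires a continuous f with f(a)*f(b) < 0, and
# halves the bracket until it is smaller than the tolerance.

def bisect(f, a, b, tol=1e-10):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign change to bracket a root"
    while b - a > tol:
        m = (a + b) / 2.0
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm          # root lies in [a, m]
        else:
            a, fa = m, fm          # root lies in [m, b]
    return (a + b) / 2.0

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
print(round(root, 6))  # → 1.414214
```

Each pass halves the bracket, so the error shrinks by a guaranteed factor of 2 per iteration regardless of the shape of f.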