(+) Generalizes to higher dimensions
(-) Requires the derivative f'(x)
(-) Can diverge (you can only control the initial guess)
When we cannot or do not want to calculate f'(x) analytically, we can use a forward
difference, backward difference, or central difference to approximate it
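The three difference formulas above can be sketched as follows; the test function and step size h are illustrative choices, not from the notes:

```python
# Finite-difference approximations of f'(x).
# Forward/backward are O(h) accurate; central is O(h^2) accurate.

def forward_diff(f, x, h=1e-6):
    """Forward difference: (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h=1e-6):
    """Backward difference: (f(x) - f(x - h)) / h."""
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h=1e-6):
    """Central difference: (f(x + h) - f(x - h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**3           # true derivative at x = 2 is 3 * 2^2 = 12
print(forward_diff(f, 2.0))  # close to 12
print(central_diff(f, 2.0))  # even closer to 12
```

Note the trade-off the notes mention below: a smaller h gives a more accurate approximation, until floating-point round-off starts to dominate.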
Secant Method
use a backward difference (because we only know the two previous guesses) to approximate
the derivative:
f'(x_i) ≈ (f(x_i) - f(x_{i-1})) / (x_i - x_{i-1})
Substituting into the Newton-Raphson iteration gives
x_{i+1} = x_i - f(x_i) (x_i - x_{i-1}) / (f(x_i) - f(x_{i-1})), where x_{i-1} and x_i are
the two previous guesses
*lose quadratic convergence, but we do not have to calculate the derivative
*need two initial guesses
recall that the smaller the step size, the more accurate the approximation
*When the Secant method is close to converging, it will behave like the Newton-Raphson
method
Better than linear convergence, but not quadratic (superlinear, with order equal to the
golden ratio, about 1.618)
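The Secant method described above can be sketched as follows; the example function, initial guesses, and tolerances are illustrative assumptions:

```python
# Secant method for finding a root of f(x) = 0.
# Uses two previous guesses instead of the analytic derivative.

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Iterate x_{i+1} = x_i - f(x_i)*(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0:  # secant line is flat; cannot continue
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x**2 - 2, 1.0, 2.0)
print(root)  # ≈ 1.41421356... (sqrt(2))
```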
Advantages and Disadvantages
(+) Only needs function evaluations (no derivatives needed)
(+) Generally has better than linear convergence (but no better than Newton-Raphson)
(+/-) Can be generalized to higher dimensions...but it is intense
(-) Can diverge
Of the open and bracketed methods discussed so far, the best is the Newton-Raphson
method; its only downside is that you have to calculate the derivative
MATLAB Functions
MATLAB has the function fzero( ), which uses both open and bracketed methods when
calculating a root of a function:
x = fzero(fun, x0)
*Note: x0 can be an initial guess or a bracket [a, b]
Nonlinear Systems of Equations

Given n equations in n unknowns x = (x_1, ..., x_n), let
F(x) = [f_1(x), ..., f_n(x)]^T, then the problem is to solve F(x) = 0
We could use Fixed-Point Iteration, such that x_{k+1} = G(x_k), but as before, we can
get better performance by using Newton-Raphson
Consider a system of two equations,
f_1(x_1, x_2) = 0
f_2(x_1, x_2) = 0
Apply Taylor's Theorem about the current iterate x_k:
F(x_k + Δx) ≈ F(x_k) + J(x_k) Δx
In general, J(x) = [∂f_i/∂x_j] is an n×n matrix of partial derivatives;
compare with the linear system A x = b
J is known as the Jacobian. We want to find the root, so set F(x_k + Δx) = 0 and
solve for Δx:
J(x_k) Δx_k = -F(x_k), therefore
x_{k+1} = x_k + Δx_k
is the Newton-Raphson method for multiple dimensions
*Convergence is still quadratic, but J has n^2 elements
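The multidimensional iteration above can be sketched for a 2x2 system; the example system, its Jacobian, and the starting point are illustrative assumptions (the 2x2 linear solve uses Cramer's rule to stay self-contained):

```python
# Multidimensional Newton-Raphson for a 2x2 system F(x) = 0.
# Each step solves J(x_k) dx = -F(x_k), then updates x_{k+1} = x_k + dx.

def newton_2d(F, J, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f1, f2 = F(x)
        (a, b), (c, d) = J(x)            # J = [[a, b], [c, d]]
        det = a * d - b * c
        dx1 = (-f1 * d + f2 * b) / det   # Cramer's rule for J dx = -F
        dx2 = (f1 * c - f2 * a) / det
        x = (x[0] + dx1, x[1] + dx2)
        if abs(dx1) + abs(dx2) < tol:
            break
    return x

# Example: x^2 + y^2 = 1 and x = y  ->  root at (sqrt(2)/2, sqrt(2)/2)
F = lambda p: (p[0]**2 + p[1]**2 - 1, p[0] - p[1])
J = lambda p: ((2 * p[0], 2 * p[1]), (1.0, -1.0))
print(newton_2d(F, J, (1.0, 0.5)))  # ≈ (0.70710678, 0.70710678)
```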
To avoid calculating J, we could apply a multi-dimensional Secant method (e.g.,
Broyden's method, which updates an approximation to J at each step)

Optimization
many engineering problems involve maximizing or minimizing functions
Examples
minimizing fuel to transfer between two orbits
minimizing drag on an aircraft (altitude and velocity)
maximizing profit
To solve an optimization problem, we need a function (performance index) and a set of
independent variables
Example
Consider a scalar function f(x)
At the maximum and minimum points (optimum points) of a smooth function, f'(x) = 0
MAX: f'(x) = 0 and f''(x) < 0 (f(x) is concave down)
MIN: f'(x) = 0 and f''(x) > 0 (f(x) is concave up)
When f''(x) = 0, the test is inconclusive: the point could be a min, a max, or an
inflection point
*Note that functions can have many local minima/maxima, but only one global minimum
value and one global maximum value
1) Find an optimal point by solving f'(x) = 0
2) Classify it by checking the sign of f''(x)
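These two steps can be sketched by running Newton-Raphson on f'(x) and then checking f''(x); the example function and starting guess are illustrative assumptions:

```python
# Step 1: find an optimum of f by solving f'(x) = 0 (Newton-Raphson on f').
# Step 2: classify the point using the sign of f''(x).

def find_optimum(df, d2f, x, tol=1e-12, max_iter=50):
    """Iterate x_{k+1} = x_k - f'(x_k)/f''(x_k), then classify the result."""
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    kind = "min" if d2f(x) > 0 else "max" if d2f(x) < 0 else "inconclusive"
    return x, kind

# f(x) = (x - 3)^2 + 1  ->  f'(x) = 2(x - 3), f''(x) = 2
x_opt, kind = find_optimum(lambda x: 2 * (x - 3), lambda x: 2.0, x=0.0)
print(x_opt, kind)  # 3.0 min
```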
