Popular combination is golden section search and successive parabolic interpolation, for which no derivatives are required. Most library routines for one-dimensional optimization are based on this hybrid approach.

Direct search methods use only function values. In the Nelder-Mead simplex method, for example, new point is obtained by moving along straight line from point having highest function value through centroid of other points; new point replaces worst point, and process is repeated.

Direct search methods are useful for nonsmooth functions or for small n, but expensive for larger n.
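As an illustration (not from the slides), a minimal sketch of calling such a hybrid library routine; SciPy's "brent" method combines golden section search with successive parabolic interpolation, and the test function and bracket below are assumed for the example.

```python
# Derivative-free 1-D minimization with a golden-section/parabolic hybrid.
from scipy.optimize import minimize_scalar

f = lambda x: (x - 2.0)**2 + 1.0   # assumed smooth test function, minimum at x = 2

# Bracket (a, b, c) must satisfy f(b) < f(a) and f(b) < f(c).
res = minimize_scalar(f, bracket=(0.0, 1.0, 4.0), method="brent")
print(res.x, res.fun)              # approximately 2.0 and 1.0
```

For the direct search case, scipy.optimize.minimize(f, x0, method="Nelder-Mead") applies the simplex idea described above, again without any derivatives.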
Steepest Descent Method
Let f : R^n → R be real-valued function of n real variables.

At any point x where gradient vector is nonzero, negative gradient −∇f(x) points downhill, toward lower values of f.

In fact, −∇f(x) is locally direction of steepest descent: f decreases more rapidly along direction of negative gradient than along any other.

Steepest descent method: starting from initial guess x_0, successive approximate solutions are given by

    x_{k+1} = x_k − α_k ∇f(x_k)

where α_k is line search parameter that determines how far to go in given direction.

Steepest Descent, continued

Given descent direction, such as negative gradient, determining appropriate value for α_k at each iteration is one-dimensional minimization problem

    min_{α_k} f(x_k − α_k ∇f(x_k))

that can be solved by methods already discussed.

Steepest descent method is very reliable: it can always make progress provided gradient is nonzero.

But method is myopic in its view of function's behavior, and resulting iterates can zigzag back and forth, making very slow progress toward solution.

In general, convergence rate of steepest descent is only linear, with constant factor that can be arbitrarily close to 1.
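A minimal sketch of this iteration (assuming NumPy and SciPy; the function name, tolerance, and iteration cap are illustrative choices, not from the slides), with the line search done by a one-dimensional minimizer as discussed above:

```python
# Steepest descent: x_{k+1} = x_k - alpha_k * grad f(x_k),
# with alpha_k chosen by a 1-D line search at each step.
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # progress is possible only while gradient is nonzero
            break
        # One-dimensional problem: minimize f(x - alpha * g) over alpha.
        alpha = minimize_scalar(lambda a: f(x - a * g)).x
        x = x - alpha * g
    return x
```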
MultiDimensional Optimization Example: Steepest Descent Example, continued Use steepest descent method to minimize
xk f (x) = 0.5x2 + 2.5x2
1
2
Gradient is given by
Taking x0 = f (x) = 5
, we have
1 5.000
3.333
2.222
1.481
0.988
0.658
0.439
0.293
0.195
0.130 x1
5x2 f (x0 ) = 5
5 Performing line search along negative gradient direction,
min f (x0 − α0 f (x0 ))
α0 exact minimum along line is given by α0 = 1/3, so next
3.333
approximation is x1 =
−0.667
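A short numerical check of this example (a sketch assuming NumPy and SciPy, not part of the original notes); on this trajectory the exact line search returns α_k = 1/3 at every step, so the iterates reproduce the table above:

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: 0.5 * x[0]**2 + 2.5 * x[1]**2
grad = lambda x: np.array([x[0], 5.0 * x[1]])

x = np.array([5.0, 1.0])
for k in range(9):
    g = grad(x)
    # Exact line search; evaluates to alpha = 1/3 at every step here.
    alpha = minimize_scalar(lambda a: f(x - a * g)).x
    x = x - alpha * g
    print(k + 1, np.round(x, 3), round(f(x), 3))   # 1 [ 3.333 -0.667] 6.667, ...
```

Each step shrinks f by the constant factor 4/9, illustrating the linear convergence noted above; for more ill-conditioned quadratics that factor can be arbitrarily close to 1, and the zigzagging is correspondingly slower.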