Michael T. Heath, Scientific Computing: Optimization (slides 22-28 of 74)

Example: Golden Section Search (continued)

Remaining columns of the golden section search example table for f(x) = 0.5 − x exp(−x²), carried over from the preceding slide; the x1 column is not included in this preview:

 f1      x2      f2
0.074   1.236   0.232
0.122   0.764   0.074
0.074   0.944   0.113
0.074   0.764   0.074
0.085   0.652   0.074
0.074   0.695   0.071
0.071   0.721   0.071
0.072   0.695   0.071
0.071   0.705   0.071
0.071   0.711   0.071

Successive Parabolic Interpolation

Fit quadratic polynomial to three function values.
Take minimum of quadratic to be new approximation to minimum of function.
New point replaces oldest of three previous points, and process is repeated until convergence.
Convergence rate of successive parabolic interpolation is superlinear, with r ≈ 1.324.

< interactive example >

Example: Successive Parabolic Interpolation

Use successive parabolic interpolation to minimize f(x) = 0.5 − x exp(−x²).

 x_k     f(x_k)
0.000   0.500
0.600   0.081
1.200   0.216
0.754   0.073
0.721   0.071
0.692   0.071
0.707   0.071

(A Python sketch of this iteration is given after the slide text below.)

< interactive example >

Newton's Method

Another local quadratic approximation is truncated Taylor series

    f(x + h) ≈ f(x) + f′(x) h + (f″(x)/2) h²

By differentiation, minimum of this quadratic function of h is given by h = −f′(x)/f″(x), which suggests iteration scheme

    x_{k+1} = x_k − f′(x_k)/f″(x_k)

This is Newton's method for solving nonlinear equation f′(x) = 0.

Newton's method for finding minimum normally has quadratic convergence rate, but must be started close enough to solution to converge.

< interactive example >

Example: Newton's Method

Use Newton's method to minimize f(x) = 0.5 − x exp(−x²).

First and second derivatives of f are given by

    f′(x) = (2x² − 1) exp(−x²)   and   f″(x) = 2x(3 − 2x²) exp(−x²)

so Newton iteration for zero of f′ is given by

    x_{k+1} = x_k − (2x_k² − 1) / (2x_k(3 − 2x_k²))

Using starting guess x_0 = 1, we obtain

 x_k     f(x_k)
1.000   0.132
0.500   0.111
0.700   0.071
0.707   0.071

(A Python sketch of this iteration is also given after the slide text below.)

Safeguarded Methods

As with nonlinear equations in one dimension, slow-but-sure and fast-but-risky optimization methods can be combined to provide both safety and efficiency.

Direct Search Methods

Direct search methods for multidimensional optimization make no use of function values other than comparing them.

For minimizing function f of n variables, Nelder-Mead method begins with n + 1 starting points, forming simplex in Rⁿ ...
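To make the parabolic interpolation step above concrete, here is a minimal Python sketch (not code from Heath's materials): each iteration computes the vertex of the quadratic through the three current points and replaces the oldest point with it. The function name parabolic_min, the tolerance, and the iteration cap are illustrative choices.

import math

def f(x):
    # Objective from the example slides: f(x) = 0.5 - x exp(-x^2)
    return 0.5 - x * math.exp(-x * x)

def parabolic_min(f, x0, x1, x2, tol=1e-4, max_iter=20):
    # Successive parabolic interpolation, unsafeguarded sketch.
    # Keeps the three most recent points, oldest first.
    pts = [x0, x1, x2]
    vals = [f(x) for x in pts]
    for _ in range(max_iter):
        a, b, c = pts
        fa, fb, fc = vals
        # Vertex of the parabola interpolating (a, fa), (b, fb), (c, fc)
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if den == 0.0:
            break                       # degenerate (collinear) points; give up
        x_new = b - 0.5 * num / den
        if abs(x_new - pts[-1]) < tol:
            return x_new
        pts = pts[1:] + [x_new]         # new point replaces oldest of the three
        vals = vals[1:] + [f(x_new)]
    return pts[-1]

# Starting points taken from the example slide: 0.0, 0.6, 1.2
print(parabolic_min(f, 0.0, 0.6, 1.2))  # converges near 0.707

With the starting values from the slide, the first computed vertex is 0.754, in agreement with the table above.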
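The Newton iteration from the example slide can be sketched the same way. This hard-codes the derivatives f′ and f″ given above and the starting guess x_0 = 1; the helper names are illustrative and not from any library.

import math

def fprime(x):
    # f'(x) = (2x^2 - 1) exp(-x^2)
    return (2 * x * x - 1) * math.exp(-x * x)

def fsecond(x):
    # f''(x) = 2x (3 - 2x^2) exp(-x^2)
    return 2 * x * (3 - 2 * x * x) * math.exp(-x * x)

def newton_min(x, tol=1e-6, max_iter=20):
    # Newton's method for a one-dimensional minimum: x <- x - f'(x)/f''(x)
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(newton_min(1.0))  # iterates 1.0, 0.5, 0.7, 0.707, ...

The exact minimizer is where f′ vanishes, i.e. 2x² − 1 = 0, so x = 1/√2 ≈ 0.7071, consistent with the final iterates in both example tables.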
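The preview is cut off just as the Nelder-Mead simplex method is introduced. For readers who want to experiment with it, the method is available in SciPy's general-purpose optimizer; the snippet below is an illustrative usage, assuming SciPy is installed, applied to the standard two-variable Rosenbrock test function rather than to anything from these slides.

import numpy as np
from scipy.optimize import minimize

def rosen(x):
    # Rosenbrock test function of two variables (illustrative objective)
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

x0 = np.array([-1.2, 1.0])   # n = 2 variables, so Nelder-Mead builds a simplex of n + 1 = 3 vertices
result = minimize(rosen, x0, method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-6})
print(result.x, result.fun)  # approaches the minimizer [1, 1] with f = 0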