Multi-Dimensional Optimization: Unconstrained Optimization
Michael T. Heath, Scientific Computing (slides 33-34 / 74)

Example, continued

Iterates of steepest descent for f(x) = 0.5 x1^2 + 2.5 x2^2 (the same
quadratic used in the Newton example below), starting from x0 = (5, 1)^T:

         x_k              f(x_k)       ∇f(x_k)
   ( 5.000,  1.000)       15.000    ( 5.000,  5.000)
   ( 3.333, −0.667)        6.667    ( 3.333, −3.333)
   ( 2.222,  0.444)        2.963    ( 2.222,  2.222)
   ( 1.481, −0.296)        1.317    ( 1.481, −1.481)
   ( 0.988,  0.198)        0.585    ( 0.988,  0.988)
   ( 0.658, −0.132)        0.260    ( 0.658, −0.658)
   ( 0.439,  0.088)        0.116    ( 0.439,  0.439)
   ( 0.293, −0.059)        0.051    ( 0.293, −0.293)
   ( 0.195,  0.039)        0.023    ( 0.195,  0.195)
   ( 0.130, −0.026)        0.010    ( 0.130, −0.130)
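The steepest descent iterates tabulated above can be reproduced with a few lines of NumPy. This is a sketch, not from the slides; it uses the closed-form exact line search step alpha = (g.g) / (g.Ag), which is valid for a quadratic f(x) = 0.5 x^T A x:

```python
import numpy as np

# f(x) = 0.5*x1^2 + 2.5*x2^2  is  0.5 * x^T A x  with A = diag(1, 5)
A = np.diag([1.0, 5.0])

def f(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

x = np.array([5.0, 1.0])  # starting point from the example
for k in range(10):
    g = grad(x)
    # exact line search: for a quadratic, alpha = (g.g) / (g.Ag)
    alpha = (g @ g) / (g @ A @ g)
    x = x - alpha * g
    # first iteration gives x = (3.333, -0.667), matching the table
    print(f"k={k}: x = {x}, f(x) = {f(x):.3f}")
```

On this problem each step shrinks f(x_k) by the constant factor 4/9, which is why the method needs many iterations despite never diverging.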
Newton's Method

Broader view can be obtained by local quadratic approximation, which is
equivalent to Newton's method

In multidimensional optimization, we seek zero of gradient, so Newton
iteration has form

    x_{k+1} = x_k − H_f(x_k)^{−1} ∇f(x_k)

where H_f(x) is Hessian matrix of second partial derivatives of f,

    {H_f(x)}_{ij} = ∂²f(x) / (∂x_i ∂x_j)

< interactive example >
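The iteration x_{k+1} = x_k − H_f(x_k)^{−1} ∇f(x_k) can be sketched as a short routine. This is a minimal sketch assuming NumPy; `grad` and `hess` are caller-supplied callables, and the linear system H_f(x_k) s_k = −∇f(x_k) is solved rather than forming the inverse:

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton's method for minimization.

    Solves H_f(x_k) s_k = -grad f(x_k) for the Newton step s_k at each
    iteration, rather than explicitly inverting the Hessian.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # gradient ~ 0: stationary point found
            break
        s = np.linalg.solve(hess(x), -g)  # Newton step
        x = x + s
    return x
```

On the quadratic example worked out below, this routine terminates after a single Newton step.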
Newton's Method, continued

Rather than explicitly inverting the Hessian, solve linear system

    H_f(x_k) s_k = −∇f(x_k)

for Newton step s_k, then take as next iterate

    x_{k+1} = x_k + s_k

Convergence rate of Newton's method for minimization is normally quadratic

As usual, Newton's method is unreliable unless started close enough to
solution to converge

Example

Use Newton's method to minimize f(x) = 0.5 x1^2 + 2.5 x2^2

Gradient and Hessian are given by

    ∇f(x) = [ x1  ]          H_f(x) = [ 1  0 ]
            [ 5x2 ]                   [ 0  5 ]

Taking x0 = [ 5 ],  we have  ∇f(x0) = [ 5 ]
            [ 1 ]                     [ 5 ]

Linear system for Newton step is

    [ 1  0 ] s0 = [ −5 ]
    [ 0  5 ]      [ −5 ]

so

    x1 = x0 + s0 = [ 5 ] + [ −5 ] = [ 0 ]
                   [ 1 ]   [ −1 ]   [ 0 ]

which is exact solution for this problem, as expected for quadratic function

< interactive example >

Newton's Method, continued

If objective function f has continuous second partial derivatives, then
Hessian matrix H_f is symmetric, and near minimum it is positive definite

Thus, linear system for step to next iterate can be solved (e.g., by
Cholesky factorization) in only about half of work required for LU
factorization

Far from minimum, H_f(x_k) may not be positive definite, so Newton step s_k
may not be descent direction for function, i.e., we may not have

    ∇f(x_k)^T s_k < 0

Newton's Method, continued

In principle, line search parameter is unnecessary with Newton's method,
since quadratic model determines length, as well as direction, of step to
next approximate solution

When started far from solution, however, it may still be advisable to
perform line search along direction of Newton step s_k to make method more
robust (damped Newton)

Once iterates are near solution, then α_k = 1 should suffice for subsequent
iterations
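A damped Newton iteration along these lines can be sketched as follows. This is an illustrative sketch assuming NumPy, with an Armijo-style backtracking line search and a steepest-descent fallback when the Newton step is not a descent direction; the specific constants (1e-4 sufficient-decrease factor, halving of the step) are common conventional choices, not from the slides:

```python
import numpy as np

def damped_newton(f, grad, hess, x0, tol=1e-10, max_iter=100):
    """Newton's method safeguarded by a backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s = np.linalg.solve(hess(x), -g)  # Newton step: H s = -grad f
        if g @ s >= 0:                    # not a descent direction, so
            s = -g                        # fall back to steepest descent
        alpha = 1.0                       # try the full Newton step first
        # backtrack until an Armijo-style sufficient decrease holds
        while f(x + alpha * s) > f(x) + 1e-4 * alpha * (g @ s) and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * s
    return x
```

Near the solution the full step α_k = 1 passes the sufficient-decrease test, so the method reverts to pure Newton and retains its quadratic convergence rate.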
This note was uploaded on 10/16/2011 for the course MECHANICAL 581 taught by Professor Wasfy during the Fall '11 term at IUPUI.