Michael T. Heath, Scientific Computing — Multi-Dimensional Optimization: Unconstrained Optimization

BFGS Method

One of most effective secant updating methods for minimization is BFGS

Unlike Newton's method for minimization, no second derivatives are required

    x0 = initial guess
    B0 = initial Hessian approximation
    for k = 0, 1, 2, ...
        Solve Bk sk = −∇f(xk) for sk
        xk+1 = xk + sk
        yk = ∇f(xk+1) − ∇f(xk)
        Bk+1 = Bk + (yk yk^T)/(yk^T sk) − (Bk sk sk^T Bk)/(sk^T Bk sk)
    end

BFGS Method, continued

In practice, factorization of Bk is updated rather than Bk itself, so linear system for sk can be solved at cost of O(n^2) rather than O(n^3) work

Can start with B0 = I, so initial step is along negative gradient, and then second derivative information is gradually built up in approximate Hessian matrix over successive iterations

BFGS normally has superlinear convergence rate, even though approximate Hessian does not necessarily converge to true Hessian

Line search can be used to enhance effectiveness
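The loop above translates almost line for line into NumPy. The sketch below is a minimal, hypothetical implementation assuming the slide's simplest choices: unit steps (no line search) and B0 = I; it is applied to the quadratic example f(x) = 0.5 x1^2 + 2.5 x2^2 used later in these slides.

```python
import numpy as np

def bfgs(f_grad, x0, n_iter=20):
    """Minimal BFGS sketch following the slide's loop: unit steps
    (no line search) and B0 = I as the initial Hessian approximation."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))
    for _ in range(n_iter):
        g = f_grad(x)
        if np.linalg.norm(g) < 1e-10:      # converged
            break
        s = np.linalg.solve(B, -g)         # solve Bk sk = -grad f(xk)
        x_new = x + s                      # xk+1 = xk + sk
        y = f_grad(x_new) - f_grad(x)      # yk = grad f(xk+1) - grad f(xk)
        Bs = B @ s
        B += np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
        x = x_new
    return x

# Gradient of the quadratic example from these slides: f(x) = 0.5 x1^2 + 2.5 x2^2
grad = lambda x: np.array([x[0], 5.0 * x[1]])
print(bfgs(grad, [5.0, 1.0]))   # converges to the minimizer [0, 0]
```

Note the sketch recomputes the full update each step; as the slide says, production codes instead update a factorization of Bk so each solve costs O(n^2).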
Example: BFGS Method

Use BFGS to minimize f(x) = 0.5 x1^2 + 2.5 x2^2

Gradient is given by ∇f(x) = [x1, 5 x2]^T

Taking x0 = [5, 1]^T and B0 = I, initial step is negative gradient, so

    x1 = x0 + s0 = [5, 1]^T + [−5, −5]^T = [0, −4]^T

Updating approximate Hessian using BFGS formula, we obtain

    B1 = [0.667  0.333]
         [0.333  4.667]

Then new step is computed and process is repeated

Example: BFGS Method, continued

          xk           f(xk)         ∇f(xk)
     5.000   1.000    15.000     5.000    5.000
     0.000  −4.000    40.000     0.000  −20.000
    −2.222   0.444     2.963    −2.222    2.222
     0.816   0.082     0.350     0.816    0.408
    −0.009  −0.015     0.001    −0.009   −0.077
    −0.001   0.001     0.000    −0.001    0.005

Increase in function value can be avoided by using line search, which generally enhances convergence

For quadratic objective function, BFGS with exact line search finds exact solution in at most n iterations, where n is dimension of problem

< interactive example >
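The first step and Hessian update above can be checked numerically. This is a small sketch (assumes NumPy) that reproduces x1 and B1 directly from the BFGS formula; note the (2,2) entry of B1 works out to 4.667, consistent with the second iterate in the table.

```python
import numpy as np

grad = lambda x: np.array([x[0], 5.0 * x[1]])  # gradient of f(x) = 0.5 x1^2 + 2.5 x2^2

x0 = np.array([5.0, 1.0])
B0 = np.eye(2)

s0 = np.linalg.solve(B0, -grad(x0))    # = -grad(x0), since B0 = I
x1 = x0 + s0
print(x1)                              # [ 0. -4.]

y0 = grad(x1) - grad(x0)               # = [-5, -25]
Bs = B0 @ s0
B1 = B0 + np.outer(y0, y0) / (y0 @ s0) - np.outer(Bs, Bs) / (s0 @ Bs)
print(B1.round(3))                     # [[0.667 0.333]
                                       #  [0.333 4.667]]
```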
Conjugate Gradient Method

Another method that does not require explicit second derivatives, and does not even store approximation to Hessian matrix, is conjugate gradient (CG) method

CG generates sequence of conjugate search directions, implicitly accumulating information about Hessian matrix

For...

Conjugate Gradient Method, continued

    x0 = initial guess
    g0 = ∇f(x0)
    s0 = −g0
    for k = 0, 1, 2, ...
        Choose αk to minimize f(xk + αk sk)
        xk+1 = xk + αk sk
        gk+1 = ∇f(xk+1)
        βk+1 = (gk+1^T gk+1)/(gk^T gk)
        sk+1 = −gk+1 + βk+1 sk
    end
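The CG loop above can be sketched as follows — a hypothetical NumPy version applied to the same quadratic example f(x) = 0.5 x1^2 + 2.5 x2^2. The slide only says "choose αk to minimize f(xk + αk sk)"; for a quadratic f(x) = 0.5 x^T A x that exact line search has the closed form αk = −(gk^T sk)/(sk^T A sk), which this sketch assumes.

```python
import numpy as np

A = np.diag([1.0, 5.0])          # Hessian of f(x) = 0.5 x1^2 + 2.5 x2^2
grad = lambda x: A @ x

def conjugate_gradient(x0, n_iter=2):
    """CG iteration from the slide, with the exact line-search alpha
    available in closed form because f is quadratic."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    s = -g                                   # s0 = -g0
    for _ in range(n_iter):
        alpha = -(g @ s) / (s @ (A @ s))     # exact minimizer of f(x + alpha*s)
        x = x + alpha * s                    # xk+1 = xk + alpha_k sk
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)     # beta_{k+1}, Fletcher-Reeves form
        s = -g_new + beta * s                # next conjugate direction
        g = g_new
    return x

print(conjugate_gradient([5.0, 1.0]))  # minimizer [0, 0] (up to rounding) in n = 2 steps
```

With exact line search on an n-dimensional quadratic, CG terminates at the exact solution in at most n steps, so two iterations suffice here.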
Fall '11, Wasfy, Mechanical Engineering