For quadratic objective function, CG is theoretically exact after at most n iterations, where n is dimension of problem

CG is effective for general unconstrained minimization as well

Alternative formula for βk+1 is

    βk+1 = ((gk+1 − gk)^T gk+1) / (gk^T gk)
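As an illustration, here is a minimal Python sketch of the nonlinear CG iteration using this alternative formula for βk+1 (often called the Polak-Ribière form). The use of SciPy's one-dimensional minimizer as the line search, and all names in the sketch, are illustrative assumptions rather than part of the original slides.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def conjugate_gradient(f, grad, x0, tol=1e-8, max_iter=100):
        """Nonlinear CG with the alternative (Polak-Ribiere) formula for beta."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        s = -g                                      # initial direction: negative gradient
        for _ in range(max_iter):
            alpha = minimize_scalar(lambda a: f(x + a * s)).x   # 1-D line search along s
            x_new = x + alpha * s
            g_new = grad(x_new)
            if np.linalg.norm(g_new) < tol:
                return x_new
            beta = ((g_new - g) @ g_new) / (g @ g)  # alternative formula for beta
            s = -g_new + beta * s                   # next search direction
            x, g = x_new, g_new
        return x

    # usage: x_min = conjugate_gradient(f, grad_f, x0) for any smooth f with gradient grad_f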
Example: Conjugate Gradient Method

Starting from x0 = [5, 1]^T, initial search direction is negative gradient,

    s0 = −g0 = −∇f(x0) = [−5, −5]^T

Exact minimum along line is given by α0 = 1/3, so next approximation is x1 = [3.333, −0.667]^T, and we compute new gradient,

    g1 = ∇f(x1) = [3.333, −3.333]^T

Example, continued

So far there is no difference from steepest descent method

At this point, however, rather than search along new negative gradient, we compute instead

    β1 = (g1^T g1) / (g0^T g0) = 0.444

which gives as next search direction

    s1 = −g1 + β1 s0 = [−3.333, 3.333]^T + 0.444 [−5, −5]^T = [−5.556, 1.111]^T

Minimum along this direction is given by α1 = 0.6, which gives exact solution at origin, as expected for quadratic function

< interactive example >
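The arithmetic in this example can be checked with a few lines of NumPy. Because the preview omits the problem statement, the quadratic f(x) = 0.5 x1^2 + 2.5 x2^2 and starting point x0 = [5, 1]^T used below are assumptions; they are, however, the values consistent with the gradients and step lengths quoted above.

    import numpy as np

    A = np.diag([1.0, 5.0])                # Hessian of the assumed quadratic
    grad = lambda x: A @ x                 # its gradient

    x0 = np.array([5.0, 1.0])              # assumed starting point
    g0 = grad(x0)
    s0 = -g0                               # [-5, -5]

    alpha0 = -(g0 @ s0) / (s0 @ A @ s0)    # exact line minimum: 1/3
    x1 = x0 + alpha0 * s0                  # [3.333, -0.667]
    g1 = grad(x1)                          # [3.333, -3.333]

    beta1 = (g1 @ g1) / (g0 @ g0)          # 0.444
    s1 = -g1 + beta1 * s0                  # [-5.556, 1.111]

    alpha1 = -(g1 @ s1) / (s1 @ A @ s1)    # 0.6
    x2 = x1 + alpha1 * s1                  # [0, 0], the exact minimum
    print(alpha0, x1, g1, beta1, s1, alpha1, x2)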
Truncated Newton Methods

Another way to reduce work in Newton-like methods is to solve linear system for Newton step by iterative method

Small number of iterations may suffice to produce step as useful as true Newton step, especially far from overall solution, where true Newton step may be unreliable anyway

Good choice for linear iterative solver is CG method, which gives step intermediate between steepest descent and Newton-like step

Since only matrix-vector products are required, explicit formation of Hessian matrix can be avoided by using finite difference of gradient along given vector
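A sketch of how such a truncated Newton step might look in Python is given below, assuming only a gradient routine is available: the Hessian-vector products needed by the inner CG iteration are approximated by a finite difference of the gradient along the given vector. The function names, the fixed inner iteration count, and the difference step are illustrative assumptions.

    import numpy as np

    def hessvec(grad, x, v, eps=1e-6):
        # finite-difference approximation of H(x) @ v from two gradient evaluations
        return (grad(x + eps * v) - grad(x)) / eps

    def truncated_newton_step(grad, x, inner_iters=10, tol=1e-10):
        # approximately solve H(x) s = -g(x) with a few CG iterations,
        # using only Hessian-vector products
        g = grad(x)
        s = np.zeros_like(g)
        r = -g.copy()              # residual of the linear system (s = 0 initially)
        p = r.copy()               # CG direction
        for _ in range(inner_iters):
            Hp = hessvec(grad, x, p)
            alpha = (r @ r) / (p @ Hp)
            s = s + alpha * p
            r_new = r - alpha * Hp
            if np.linalg.norm(r_new) < tol:
                break
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
        return s                   # step intermediate between -g and the Newton step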
Nonlinear Least Squares

Given data (ti, yi), find vector x of parameters that gives "best fit" in least squares sense to model function f(t, x), where f is nonlinear function of x

Define components of residual function

    ri(x) = yi − f(ti, x),   i = 1, . . . , m

so we want to minimize φ(x) = (1/2) r^T(x) r(x)

Gradient vector is

    ∇φ(x) = J^T(x) r(x)

and Hessian matrix is

    Hφ(x) = J^T(x) J(x) + Σ_{i=1}^{m} ri(x) Hi(x)

where J(x) is Jacobian of r(x), and Hi(x) is Hessian of ri(x)
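To make these definitions concrete, the following sketch evaluates r(x), J(x), φ(x), and ∇φ(x) for a hypothetical two-parameter model f(t, x) = x1 exp(x2 t); the model, the sample data, and the routine names are assumptions for illustration only, not part of the original slides.

    import numpy as np

    def residual(x, t, y):
        # r_i(x) = y_i - f(t_i, x) for the assumed model f(t, x) = x1 * exp(x2 * t)
        return y - x[0] * np.exp(x[1] * t)

    def jacobian(x, t, y):
        # J_ij = d r_i / d x_j
        J = np.empty((t.size, 2))
        J[:, 0] = -np.exp(x[1] * t)
        J[:, 1] = -x[0] * t * np.exp(x[1] * t)
        return J

    def phi(x, t, y):
        r = residual(x, t, y)
        return 0.5 * (r @ r)                 # phi(x) = (1/2) r^T(x) r(x)

    def grad_phi(x, t, y):
        return jacobian(x, t, y).T @ residual(x, t, y)    # J^T(x) r(x)

    t = np.array([0.0, 1.0, 2.0, 3.0])       # hypothetical data
    y = np.array([2.0, 1.2, 0.8, 0.5])
    x = np.array([2.0, -0.5])                # trial parameter vector
    print(phi(x, t, y), grad_phi(x, t, y))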
Nonlinear Least Squares, continued

Linear system for Newton step is

    [J^T(xk) J(xk) + Σ_{i=1}^{m} ri(xk) Hi(xk)] sk = −J^T(xk) r(xk)
Gauss-Newton Method

This motivates Gauss-Newton method for nonlinear least squares, in which second-order term is dropped and linear system

    J^T(xk) J(xk) sk = −J^T(xk) r(xk)

is solved for approximate Newton step sk at each iteration
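A minimal sketch of the resulting iteration is shown below; each step solves J^T(xk) J(xk) sk = −J^T(xk) r(xk) and updates xk. It could be driven by residual and Jacobian routines like those sketched earlier; the fixed iteration count and the absence of safeguards (damping or a line search) are simplifying assumptions.

    import numpy as np

    def gauss_newton(residual, jacobian, x0, t, y, iters=20):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            r = residual(x, t, y)
            J = jacobian(x, t, y)
            s = np.linalg.solve(J.T @ J, -J.T @ r)   # Gauss-Newton step
            x = x + s
        return x

    # usage (with the residual/jacobian sketches above):
    #   x_fit = gauss_newton(residual, jacobian, [1.0, -1.0], t, y)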