Example: Conjugate Gradient Method

Use CG method to minimize

    f(x) = 0.5 x1^2 + 2.5 x2^2

Gradient is given by

    ∇f(x) = [ x1, 5 x2 ]^T

Taking x0 = [5, 1]^T, initial search direction is negative gradient

Michael T. Heath, Scientific Computing
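The CG iteration on this quadratic can be sketched in code (a minimal numpy sketch, not from the slides; since the gradient is A x with A = diag(1, 5), linear CG applies and terminates in at most two steps for this 2-by-2 system):

```python
import numpy as np

# Minimal sketch (not from the slides): linear CG for the quadratic
# f(x) = 0.5*x1**2 + 2.5*x2**2, whose gradient is A @ x with A = diag(1, 5).
def cg_quadratic(A, x0, tol=1e-12):
    x = x0.astype(float)
    r = -A @ x            # residual = negative gradient of f at x
    s = r.copy()          # initial search direction is negative gradient
    for _ in range(len(x)):              # exact CG needs at most n iterations
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (s @ A @ s)    # exact line search along s
        x = x + alpha * s
        r_new = r - alpha * (A @ s)
        s = r_new + ((r_new @ r_new) / (r @ r)) * s   # conjugate direction update
        r = r_new
    return x

x_min = cg_quadratic(np.diag([1.0, 5.0]), np.array([5.0, 1.0]))
# converges to the minimizer [0, 0] in two iterations
```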
Nonlinear Least Squares, continued

Hessian of φ is given by

    ∇²φ(xk) = J^T(xk) J(xk) + sum_{i=1}^{m} ri(xk) Hi(xk)

Hessian matrices Hi are usually inconvenient and expensive to compute. Moreover, in ∇²φ each Hi is multiplied by residual component ri, which is small at solution if fit of model function to data is good.

Gauss-Newton Method

This motivates Gauss-Newton method for nonlinear least squares, in which second-order term is dropped and linear system

    J^T(xk) J(xk) sk = -J^T(xk) r(xk)

is solved for approximate Newton step sk at each iteration. This is system of normal equations for linear least squares problem

    J(xk) sk ≅ -r(xk)

which can be solved better by QR factorization. Next approximate solution is then given by

    xk+1 = xk + sk

and process is repeated until convergence.
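The remark about QR can be illustrated numerically (a sketch assuming numpy; J and r are taken from the worked example below, at x0 = [1, 0]). Both routes give the same step here, but forming J^T J squares the condition number, so an orthogonalization-based least-squares solver is preferable:

```python
import numpy as np

# Sketch (numpy assumed): the Gauss-Newton step solves J s ~= -r in the
# least squares sense.  J and r are from the worked example at x0 = [1, 0].
# Forming the normal equations J^T J squares the condition number, so a
# stable least-squares solver is preferable to solving them directly.
J = np.array([[-1.0,  0.0],
              [-1.0, -1.0],
              [-1.0, -2.0],
              [-1.0, -3.0]])
r = np.array([1.0, -0.3, -0.7, -0.9])

s_normal = np.linalg.solve(J.T @ J, -J.T @ r)       # normal equations route
s_stable, *_ = np.linalg.lstsq(J, -r, rcond=None)   # orthogonalization-based route
# both give s0 ~ [0.69, -0.61]
```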
Example: Gauss-Newton Method

Use Gauss-Newton method to fit nonlinear model function

    f(t, x) = x1 exp(x2 t)

to data

    t   0.0   1.0   2.0   3.0
    y   2.0   0.7   0.3   0.1

For this model function, entries of Jacobian matrix of residual function r are given by

    {J(x)}i,1 = ∂ri(x)/∂x1 = -exp(x2 ti)
    {J(x)}i,2 = ∂ri(x)/∂x2 = -x1 ti exp(x2 ti)

Example, continued

If we take x0 = [1, 0]^T, then Gauss-Newton step s0 is given by linear least squares problem

    [ -1   0 ]         [ -1  ]
    [ -1  -1 ]  s0  ≅  [ 0.3 ]
    [ -1  -2 ]         [ 0.7 ]
    [ -1  -3 ]         [ 0.9 ]

whose solution is s0 = [0.69, -0.61]^T

Then next approximate solution is given by x1 = x0 + s0, and process is repeated until convergence.
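The step s0 can be reproduced with a short computation (a sketch assuming numpy; residual is taken as ri = yi - f(ti, x), which matches the Jacobian signs given in the notes):

```python
import numpy as np

# Sketch (numpy assumed) of the step computation for the model
# f(t, x) = x1*exp(x2*t), with residual r_i = y_i - f(t_i, x) so that the
# Jacobian entries match the signs given in the notes.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 0.7, 0.3, 0.1])

def residual(x):
    return y - x[0] * np.exp(x[1] * t)

def jacobian(x):
    e = np.exp(x[1] * t)
    return np.column_stack((-e, -x[0] * t * e))

x0 = np.array([1.0, 0.0])
s0, *_ = np.linalg.lstsq(jacobian(x0), -residual(x0), rcond=None)
# s0 ~ [0.69, -0.61], matching the slide
```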
Example, continued

    xk               ||r(xk)||_2^2
    1.000   0.000       2.390
    1.690  -0.610       0.212
    1.975  -0.930       0.007
    1.994  -1.004       0.002
    1.995  -1.009       0.002
    1.995  -1.010       0.002

Gauss-Newton Method, continued

Gauss-Newton method replaces nonlinear least squares problem by sequence of linear least squares problems whose solutions converge to solution of original nonlinear problem.

If residual at solution is large, then second-order term omitted from Hessian is not negligible, and Gauss-Newton method may converge slowly or fail to converge.

In such "large-residual" cases, it may be best to use general nonlinear minimization method that takes into account true full Hessian matrix.

< interactive example >
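The full iteration tabulated above can be sketched as follows (numpy assumed); it reproduces the limit x ~ [1.995, -1.010] with squared residual norm ~ 0.002:

```python
import numpy as np

# Sketch (numpy assumed) of the full Gauss-Newton iteration for the example;
# it reproduces the iterates tabulated above, approaching x ~ [1.995, -1.010]
# with squared residual norm ~ 0.002.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 0.7, 0.3, 0.1])

def residual(x):
    return y - x[0] * np.exp(x[1] * t)

def jacobian(x):
    e = np.exp(x[1] * t)
    return np.column_stack((-e, -x[0] * t * e))

x = np.array([1.0, 0.0])
for _ in range(20):
    s, *_ = np.linalg.lstsq(jacobian(x), -residual(x), rcond=None)
    x = x + s                        # xk+1 = xk + sk
    if np.linalg.norm(s) < 1e-10:    # stop when step is negligible
        break
```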
Levenberg-Marquardt Method

For equality-constrained minimization problem...
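The Levenberg-Marquardt slide itself is cut off in this preview. As a hedged sketch of the standard method it names (not taken from these notes): the step solves (J^T J + mu I) s = -J^T r, which is equivalent to an augmented linear least squares problem, so it can also be solved stably without forming normal equations. The fixed damping value mu below is purely illustrative; practical implementations adapt it at each iteration:

```python
import numpy as np

# Hedged sketch (not from these notes): the Levenberg-Marquardt step solves
# (J^T J + mu*I) s = -J^T r, equivalent to the augmented least squares
# problem [J; sqrt(mu)*I] s ~= [-r; 0].  Model and data reused from the
# Gauss-Newton example; mu is a fixed illustrative damping value, whereas
# practical implementations adjust it at each iteration.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 0.7, 0.3, 0.1])

def residual(x):
    return y - x[0] * np.exp(x[1] * t)

def jacobian(x):
    e = np.exp(x[1] * t)
    return np.column_stack((-e, -x[0] * t * e))

def lm_step(x, mu):
    J, r = jacobian(x), residual(x)
    J_aug = np.vstack((J, np.sqrt(mu) * np.eye(2)))   # append sqrt(mu)*I rows
    r_aug = np.concatenate((-r, np.zeros(2)))
    s, *_ = np.linalg.lstsq(J_aug, r_aug, rcond=None)
    return s

x = np.array([1.0, 0.0])
for _ in range(50):
    x = x + lm_step(x, mu=1e-3)
# with small damping this reaches the same solution as Gauss-Newton
```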
Fall '11, Wasfy, Mechanical Engineering