Levenberg-Marquardt Method

Levenberg-Marquardt method is another useful alternative when Gauss-Newton approximation is inadequate or yields rank-deficient linear least squares subproblem.

In this method, linear system at each iteration is of form

    (J^T(x_k) J(x_k) + μ_k I) s_k = −J^T(x_k) r(x_k)

where μ_k is scalar parameter chosen by some strategy.

Corresponding linear least squares problem is

    [  J(x_k)  ]        [ −r(x_k) ]
    [ √μ_k  I  ] s_k ≅  [    0    ]

With suitable strategy for choosing μ_k, this method can be very robust in practice, and it forms basis for several effective software packages.

Michael T. Heath, Scientific Computing, 60 / 74
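As an illustration, here is a minimal sketch of a Levenberg-Marquardt iteration in Python, using the augmented least squares formulation above. The exponential-fit residual r(x) = x_1 e^{x_2 t} − y and the simple accept/reject rule for adjusting μ_k are illustrative assumptions, not from the text:

```python
import numpy as np

def lm_step(J, r, mu):
    """One Levenberg-Marquardt step: solve the damped linear least squares
    problem [J; sqrt(mu) I] s ~= [-r; 0] for the step s."""
    n = J.shape[1]
    A = np.vstack([J, np.sqrt(mu) * np.eye(n)])
    b = np.concatenate([-r, np.zeros(n)])
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s

# Illustrative data-fitting residual (assumed): r_i(x) = x1 * exp(x2 * t_i) - y_i
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 0.7, 0.3, 0.1])

def residual(x):
    return x[0] * np.exp(x[1] * t) - y

def jacobian(x):
    return np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])

x, mu = np.array([1.0, 0.0]), 1.0
for _ in range(50):
    s = lm_step(jacobian(x), residual(x), mu)
    if np.linalg.norm(residual(x + s)) < np.linalg.norm(residual(x)):
        x = x + s              # step reduced residual: accept, relax damping
        mu = max(mu / 3, 1e-12)  # behave more like Gauss-Newton
    else:
        mu *= 3                # step rejected: increase damping, more like gradient descent
```

The damping update (divide or multiply μ_k by 3) is one common heuristic; production codes use more sophisticated trust-region strategies.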
Equality-Constrained Optimization

For equality-constrained minimization problem

    min f(x)  subject to  g(x) = 0,

where f: R^n → R and g: R^n → R^m, with m ≤ n, we seek critical point of Lagrangian L(x, λ) = f(x) + λ^T g(x).

Applying Newton's method to nonlinear system

    ∇L(x, λ) = [ ∇f(x) + J_g^T(x) λ ] = 0
               [        g(x)        ]

we obtain linear system

    [ B(x, λ)   J_g^T(x) ] [ s ]     [ ∇f(x) + J_g^T(x) λ ]
    [ J_g(x)       O     ] [ δ ] = − [        g(x)        ]

for Newton step (s, δ) in (x, λ) at each iteration, where B(x, λ) denotes Hessian of Lagrangian with respect to x.
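The block 2 × 2 Newton system can be sketched numerically. The problem below (minimize f(x) = x_1² + x_2² subject to x_1 + x_2 = 1) is an illustrative assumption; because f is quadratic and g is linear, B(x, λ) is the constant matrix 2I and one Newton step reaches the solution exactly:

```python
import numpy as np

# Illustrative problem (assumed, not from the text):
#   min f(x) = x1^2 + x2^2   subject to  g(x) = x1 + x2 - 1 = 0
# Constrained solution: x* = (0.5, 0.5) with lambda* = -1.
def grad_f(x):   return 2.0 * x
def g(x):        return np.array([x[0] + x[1] - 1.0])
def J_g(x):      return np.array([[1.0, 1.0]])
def B(x, lam):   return 2.0 * np.eye(2)   # Hessian of Lagrangian w.r.t. x

def newton_kkt_step(x, lam):
    """Solve the block 2x2 system for the Newton step (s, delta)."""
    Jg = J_g(x)
    n, m = x.size, lam.size
    K = np.block([[B(x, lam), Jg.T],
                  [Jg, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_f(x) + Jg.T @ lam, g(x)])
    sd = np.linalg.solve(K, rhs)
    return sd[:n], sd[n:]

x, lam = np.array([2.0, -3.0]), np.array([0.0])
s, delta = newton_kkt_step(x, lam)
x, lam = x + s, lam + delta   # one step suffices for this quadratic problem
```

Forming and solving the full block system as above corresponds to the "direct solution" approach; range-space and null-space methods instead exploit the block structure.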
Sequential Quadratic Programming

Foregoing block 2 × 2 linear system is equivalent to quadratic programming problem, so this approach is known as sequential quadratic programming.

Types of solution methods include:

- Direct solution methods, in which entire block 2 × 2 system is solved directly
- Range space methods, based on block elimination in block 2 × 2 linear system
- Null space methods, based on orthogonal factorization of matrix of constraint normals, J_g^T(x)

Merit Function

Once Newton step (s, δ) is determined, we need merit function to measure progress toward overall solution, for use in line search or trust region.

Popular choices include penalty function

    φ_ρ(x) = f(x) + ½ ρ g(x)^T g(x)

and augmented Lagrangian function

    L_ρ(x, λ) = f(x) + λ^T g(x) + ½ ρ g(x)^T g(x)

where parameter ρ > 0 determines relative weighting of optimality vs feasibility.
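A minimal sketch of both merit functions, used to test whether a trial step makes progress; the particular f, g, trial points, and the value ρ = 10 are illustrative assumptions:

```python
import numpy as np

rho = 10.0   # weighting parameter rho > 0 (assumed value)

# Illustrative objective and constraint (assumed):
def f(x):   return x[0]**2 + x[1]**2
def g(x):   return np.array([x[0] + x[1] - 1.0])

def penalty(x, rho):
    """phi_rho(x) = f(x) + (1/2) rho g(x)^T g(x)"""
    gx = g(x)
    return f(x) + 0.5 * rho * gx @ gx

def augmented_lagrangian(x, lam, rho):
    """L_rho(x, lam) = f(x) + lam^T g(x) + (1/2) rho g(x)^T g(x)"""
    gx = g(x)
    return f(x) + lam @ gx + 0.5 * rho * gx @ gx

# A trial step is accepted only if it reduces the merit function:
x, x_trial = np.array([2.0, -3.0]), np.array([0.5, 0.5])
accept = penalty(x_trial, rho) < penalty(x, rho)
```

Larger ρ weights feasibility (small g) more heavily; smaller ρ weights optimality (small f) more heavily.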
Given starting guess x_0, good starting guess for λ_0 for sequential quadratic programming can be obtained from least squares problem

    J_g^T(x_0) λ_0 ≅ −∇f(x_0)

Michael T. Heath, Scientific Computing, 63 / 74
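This warm start is one small least squares solve. A sketch, again assuming the illustrative f(x) = x_1² + x_2² and g(x) = x_1 + x_2 − 1:

```python
import numpy as np

# Illustrative problem data (assumed): f(x) = x1^2 + x2^2, g(x) = x1 + x2 - 1
def grad_f(x):  return 2.0 * x
def J_g(x):     return np.array([[1.0, 1.0]])

x0 = np.array([0.4, 0.6])
# Solve J_g^T(x0) lambda0 ~= -grad f(x0) in the least squares sense
lam0, *_ = np.linalg.lstsq(J_g(x0).T, -grad_f(x0), rcond=None)
```

Since J_g^T(x_0) is n × m with m ≤ n, the system is overdetermined, hence the least squares (≅) rather than exact solve.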
Penalty Methods

Merit function can also be used to convert equality-constrained problem into sequence of unconstrained problems.

Inequality-Constrained Optimization

Methods just outlined for equality constraints can be extended to handle inequality constraints by using ...
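The penalty idea can be sketched as a sequence of unconstrained minimizations with increasing ρ. The quadratic objective and linear constraint below are illustrative assumptions; for them each unconstrained subproblem has a closed-form solution via a linear solve, so no general-purpose unconstrained optimizer is needed:

```python
import numpy as np

# Illustrative problem (assumed): min x1^2 + x2^2 subject to x1 + x2 = 1,
# i.e. f(x) = x.x and g(x) = A x - b. Constrained solution: x* = (0.5, 0.5).
A, b = np.array([[1.0, 1.0]]), np.array([1.0])

def minimize_penalty(rho):
    """Unconstrained minimizer of phi_rho(x) = x.x + (rho/2) ||A x - b||^2.
    The gradient 2x + rho A^T (A x - b) is linear in x, so setting it to
    zero gives the linear system (2I + rho A^T A) x = rho A^T b."""
    n = A.shape[1]
    return np.linalg.solve(2.0 * np.eye(n) + rho * A.T @ A, rho * A.T @ b)

# Sequence of unconstrained problems with increasing penalty parameter:
for rho in [1.0, 10.0, 100.0, 1000.0]:
    x = minimize_penalty(rho)
# iterates approach the constrained minimizer (0.5, 0.5) as rho grows
```

For finite ρ the minimizer is only approximately feasible; driving ρ → ∞ recovers the constrained solution, at the cost of increasingly ill-conditioned subproblems.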
This note was uploaded on 10/16/2011 for the course MECHANICAL 581 taught by Professor Wasfy during the Fall '11 term at IUPUI.