Michael T. Heath, Scientific Computing — Unconstrained Optimization / Nonlinear Least Squares / Constrained Optimization (slides 60–64 of 74)

Levenberg-Marquardt Method

Levenberg-Marquardt method is another useful alternative when Gauss-Newton approximation is inadequate or yields rank-deficient linear least squares subproblem

In this method, linear system at each iteration is of form

    (J^T(x_k) J(x_k) + μ_k I) s_k = −J^T(x_k) r(x_k)

where μ_k is scalar parameter chosen by some strategy

Corresponding linear least squares problem is

    [   J(x_k)  ]        ∼  [ −r(x_k) ]
    [ √(μ_k) I  ]  s_k  =   [    0    ]

With suitable strategy for choosing μ_k, this method can be very robust in practice, and it forms basis for several effective software packages

< interactive example >

Equality-Constrained Optimization

For equality-constrained minimization problem

    min f(x)  subject to  g(x) = 0

where f : R^n → R and g : R^n → R^m, with m ≤ n, we seek critical point of Lagrangian function

    L(x, λ) = f(x) + λ^T g(x)

Applying Newton's method to nonlinear system

    ∇L(x, λ) = [ ∇f(x) + J_g^T(x) λ ] = 0
               [        g(x)        ]

we obtain linear system

    [ B(x, λ)   J_g^T(x) ] [ s ]      [ ∇f(x) + J_g^T(x) λ ]
    [ J_g(x)       O     ] [ δ ] = −  [        g(x)        ]

for Newton step (s, δ) in (x, λ) at each iteration, where B(x, λ) denotes Hessian of Lagrangian with respect to x
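The Levenberg-Marquardt iteration described above can be sketched in a few lines of NumPy via the augmented least squares formulation. This is a minimal illustration, not the slides' prescribed algorithm: the exponential curve-fitting test problem is hypothetical, and the μ update (divide by 10 on an accepted step, multiply by 10 on a rejected one) is just one simple strategy among many.

```python
import numpy as np

def lm_minimize(r, J, x0, mu=1e-2, tol=1e-10, max_iter=50):
    """Minimize (1/2)||r(x)||^2 by Levenberg-Marquardt.

    Each step solves the augmented linear least squares problem
        [ J(x_k)       ]        [ -r(x_k) ]
        [ sqrt(mu_k) I ] s_k ~= [    0    ]
    The mu update below (x10 / /10) is an illustrative assumption,
    not a strategy prescribed in the slides."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        rk, Jk = r(x), J(x)
        A = np.vstack([Jk, np.sqrt(mu) * np.eye(n)])
        b = np.concatenate([-rk, np.zeros(n)])
        s = np.linalg.lstsq(A, b, rcond=None)[0]
        if np.linalg.norm(r(x + s)) < np.linalg.norm(rk):
            x, mu = x + s, mu / 10   # step reduced residual: accept, damp less
        else:
            mu *= 10                 # step failed: damp more heavily
        if np.linalg.norm(s) < tol:
            break
    return x

# Hypothetical test problem: fit y = c1 * exp(c2 * t) to four data points
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 0.7, 0.3, 0.1])
r = lambda c: c[0] * np.exp(c[1] * t) - y
J = lambda c: np.column_stack([np.exp(c[1] * t),
                               c[0] * t * np.exp(c[1] * t)])
c = lm_minimize(r, J, np.array([1.0, 0.0]))   # converges near (2, -1)
```

Note that solving the augmented least squares problem is numerically preferable to forming the normal-equations matrix J^T J + μ I explicitly, for the usual conditioning reasons.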
Sequential Quadratic Programming

Foregoing block 2×2 linear system is equivalent to quadratic programming problem, so this approach is known as sequential quadratic programming

Types of solution methods include:

- Direct solution methods, in which entire block 2×2 system is solved directly
- Range space methods, based on block elimination in block 2×2 linear system
- Null space methods, based on orthogonal factorization of matrix of constraint normals, J_g^T(x)

Given starting guess x_0, good starting guess for λ_0 can be obtained from least squares problem

    J_g^T(x_0) λ_0  ≅  −∇f(x_0)

< interactive example >

Merit Function

Once Newton step (s, δ) is determined, we need merit function to measure progress toward overall solution, for use in line search or trust region

Popular choices include penalty function

    φ_ρ(x) = f(x) + (1/2) ρ g(x)^T g(x)

and augmented Lagrangian function

    L_ρ(x, λ) = f(x) + λ^T g(x) + (1/2) ρ g(x)^T g(x)

where parameter ρ > 0 determines relative weighting of optimality vs feasibility

Inequality-Constrained Optimization

Methods just outlined for equality constraints can be extended to handle inequality constraints by using ...

Penalty Methods

Merit function can also be used to convert equality-constrained problem into sequence of unconstrained problems
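The quadratic penalty function φ_ρ above can be demonstrated on a small hypothetical problem (not from the slides): minimize f(x) = x1² + x2² subject to g(x) = x1 + x2 − 1 = 0, whose constrained minimizer is (1/2, 1/2). The ρ schedule and the plain gradient-descent inner solver below are illustrative assumptions; as ρ grows, the unconstrained minimizers approach the constrained solution.

```python
import numpy as np

# Hypothetical example problem (an assumption, not from the slides):
#   minimize f(x) = x1^2 + x2^2   subject to   g(x) = x1 + x2 - 1 = 0
f      = lambda x: x @ x
grad_f = lambda x: 2.0 * x
g      = lambda x: np.array([x[0] + x[1] - 1.0])
Jg     = lambda x: np.array([[1.0, 1.0]])

def penalty_method(x, rhos=(1.0, 10.0, 100.0, 1000.0)):
    """Quadratic penalty method: for each rho in an increasing schedule,
    minimize  phi_rho(x) = f(x) + (1/2) rho g(x)^T g(x)
    by gradient descent, warm-starting from the previous solution."""
    for rho in rhos:
        # grad phi_rho = grad f + rho * Jg^T g
        grad_phi = lambda x: grad_f(x) + rho * Jg(x).T @ g(x)
        step = 1.0 / (2.0 + 2.0 * rho)   # safe step for this quadratic
        for _ in range(2000):
            x = x - step * grad_phi(x)
    return x

x = penalty_method(np.zeros(2))
# x approaches the constrained minimizer (1/2, 1/2) as rho grows
```

A design note consistent with the slide's remark on ρ: small ρ weights optimality (the iterate under-satisfies the constraint), while large ρ weights feasibility but makes the unconstrained subproblems increasingly ill-conditioned, which is why penalty methods increase ρ gradually rather than fixing one huge value.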

This note was uploaded on 10/16/2011 for the course MECHANICAL 581 taught by Professor Wasfy during the Fall '11 term at IUPUI.
