chap06_8up

# Approximate solution is then given by $x_{k+1} = x_k + s_k$


Michael T. Heath, *Scientific Computing*

…the CG method to minimize

$$f(x) = 0.5 x_1^2 + 2.5 x_2^2$$

The gradient is given by

$$\nabla f(x) = \begin{bmatrix} x_1 \\ 5 x_2 \end{bmatrix}$$

Taking $x_0 = \begin{bmatrix} 5 \\ 1 \end{bmatrix}$, the initial search direction is the negative gradient.

## Gauss-Newton Method

In Newton's method for nonlinear least squares, the system

$$\left( J^T(x_k)\, J(x_k) + \sum_{i=1}^{m} r_i(x_k)\, H_i(x_k) \right) s_k = -J^T(x_k)\, r(x_k)$$

is solved for the approximate Newton step $s_k$ at each iteration.

The Hessian matrices $H_i$ are usually inconvenient and expensive to compute. Moreover, in $H_\phi$ each $H_i$ is multiplied by the residual component $r_i$, which is small at the solution if the fit of the model function to the data is good. Dropping that second-order term gives the Gauss-Newton system

$$J^T(x_k)\, J(x_k)\, s_k = -J^T(x_k)\, r(x_k)$$

This is the system of normal equations for the linear least squares problem

$$J(x_k)\, s_k \cong -r(x_k)$$

which can be solved better by QR factorization. The next approximate solution is then given by $x_{k+1} = x_k + s_k$, and the process is repeated until convergence.

## Example: Gauss-Newton Method

Use the Gauss-Newton method to fit the nonlinear model function

$$f(t, x) = x_1 \exp(x_2 t)$$

to the data

| $t$ | $y$ |
| --- | --- |
| 0.0 | 2.0 |
| 1.0 | 0.7 |
| 2.0 | 0.3 |
| 3.0 | 0.1 |

For this model function, the entries of the Jacobian matrix of the residual function $r$ are given by

$$\{J(x)\}_{i,1} = \frac{\partial r_i(x)}{\partial x_1} = -\exp(x_2 t_i), \qquad \{J(x)\}_{i,2} = \frac{\partial r_i(x)}{\partial x_2} = -x_1 t_i \exp(x_2 t_i)$$

## Example, continued

If we take $x_0 = \begin{bmatrix} 1 & 0 \end{bmatrix}^T$, then the Gauss-Newton step $s_0$ is given by the linear least squares problem

$$\begin{bmatrix} -1 & 0 \\ -1 & -1 \\ -1 & -2 \\ -1 & -3 \end{bmatrix} s_0 \cong \begin{bmatrix} -1 \\ 0.3 \\ 0.7 \\ 0.9 \end{bmatrix}$$

whose solution is $s_0 = \begin{bmatrix} 0.69 \\ -0.61 \end{bmatrix}$.

Then the next approximate solution is given by $x_1 = x_0 + s_0$, and the process is repeated until convergence.
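The iteration described above can be sketched in NumPy. This is a minimal reconstruction of the example, not code from the slides; it uses `np.linalg.lstsq` for the least squares solve (NumPy's routine is SVD-based, whereas the slides suggest QR factorization — either works here):

```python
import numpy as np

# Data from the example
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 0.7, 0.3, 0.1])

def residual(x):
    # r_i(x) = y_i - x1 * exp(x2 * t_i)
    return y - x[0] * np.exp(x[1] * t)

def jacobian(x):
    # Jacobian entries as given on the slide
    J = np.empty((len(t), 2))
    J[:, 0] = -np.exp(x[1] * t)
    J[:, 1] = -x[0] * t * np.exp(x[1] * t)
    return J

x = np.array([1.0, 0.0])  # starting point x0 = [1, 0]^T
for k in range(6):
    J, r = jacobian(x), residual(x)
    # Gauss-Newton step: solve the linear least squares problem J s ~= -r
    s, *_ = np.linalg.lstsq(J, -r, rcond=None)
    x = x + s
print(x)  # converges to roughly [1.995, -1.010]
```

The first step computed this way is $s_0 \approx (0.69, -0.61)$, matching the hand-worked least squares problem above.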
## Example, continued

| $x_k$ | $\|r(x_k)\|_2^2$ |
| --- | --- |
| (1.000, 0.000) | 2.390 |
| (1.690, −0.610) | 0.212 |
| (1.975, −0.930) | 0.007 |
| (1.994, −1.004) | 0.002 |
| (1.995, −1.009) | 0.002 |
| (1.995, −1.010) | 0.002 |

## Gauss-Newton Method, continued

- The Gauss-Newton method replaces a nonlinear least squares problem by a sequence of linear least squares problems whose solutions converge to the solution of the original nonlinear problem
- If the residual at the solution is large, then the second-order term omitted from the Hessian is not negligible, and the Gauss-Newton method may converge slowly or fail to converge
- In such "large-residual" cases, it may be best to use a general nonlinear minimization method that takes into account the true full Hessian matrix

## Levenberg-Marquardt Method

For equality-constrained minimization problem...
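The Levenberg-Marquardt discussion itself is cut off in this preview. As a hedged sketch of the standard Levenberg-Marquardt step (the damped system $(J^T J + \mu I)\,s_k = -J^T r$, posed as an augmented least squares problem; the helper name `levenberg_marquardt_step` and the use of `np.linalg.lstsq` are my own choices, not from the slides):

```python
import numpy as np

def levenberg_marquardt_step(J, r, mu):
    """Hypothetical helper: one Levenberg-Marquardt step.

    Solves the damped system (J^T J + mu * I) s = -J^T r by posing it
    as the augmented linear least squares problem
        [ J ; sqrt(mu) * I ] s  ~=  [ -r ; 0 ],
    which avoids forming J^T J explicitly.
    """
    n = J.shape[1]
    A = np.vstack([J, np.sqrt(mu) * np.eye(n)])
    b = np.concatenate([-r, np.zeros(n)])
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s

# With mu = 0 this reduces to the Gauss-Newton step from the example:
J0 = np.array([[-1.0, 0.0], [-1.0, -1.0], [-1.0, -2.0], [-1.0, -3.0]])
r0 = np.array([1.0, -0.3, -0.7, -0.9])
s0 = levenberg_marquardt_step(J0, r0, 0.0)
print(s0)  # approximately [0.69, -0.61]
```

Larger $\mu$ biases the step toward (scaled) steepest descent, which is what makes the method more robust than pure Gauss-Newton when the Jacobian is ill-conditioned or the starting guess is poor.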

## This note was uploaded on 10/16/2011 for the course MECHANICAL 581 taught by Professor Wasfy during the Fall '11 term at IUPUI.
