AMSC/CMSC 660 Scientific Computing I, Fall 2006
UNIT 4: Nonlinear Systems and the Homotopy Method
Dianne P. O'Leary (c) 2002, 2004, 2006

The problem

Given a function F : R^n → R^n, find a point x ∈ R^n such that F(x) = 0.

Note: The one-dimensional case (n = 1) is covered in CMSC/AMSC 460. The best software for this problem is some variant of Richard Brent's zeroin, available in Netlib. In Matlab, it is called fzero. We'll assume from now on that n > 1.

Goals

To develop algorithms for solving nonlinear systems of equations.

Note: Solving nonlinear equations is close kin to solving optimization problems.
• Easier than optimization, since a "local solution" is just fine.
• Harder than optimization, since there is no natural merit function f(x) to measure our progress.

Important note: If F is a polynomial in the variables x, then use special-purpose software that enables you to find all of the solutions reliably.

Example of a polynomial system:
    x^2 y^3 + x y     = 2
    2 x y^2 + x^2 y + x y = 0

Pointers: Watson's homotopy method; Traub's software.

What we know

We already know a lot about solving nonlinear equations. The main tools are Newton's method and Newton-like methods. Differences from the methods we have studied:
• Instead of the Hessian matrix, we have the Jacobian matrix of first derivatives:
      J_ik = ∂F_i / ∂x_k.
  This matrix is generally not symmetric.
• Line searches are more difficult to guide, since we can't measure progress using the function f(x). Some attempts have been made to use ‖F(x)‖ as a merit function, but there are difficulties with this approach.

The Plan
• Newton-like methods (for easy problems)
• Globally convergent homotopy methods (for hard problems)
• A case study: polynomial equations

Newton-like methods
• applied to nonlinear least squares
• applied to nonlinear equations

Reference: C. T.
Kelley's book.

Nonlinear least squares

Note that we can solve F(x) = 0 by solving

    min_x ‖F(x)‖_2^2

using any of our methods from the previous unit, looking for a point that gives a function value of zero.

Advantages:
• Uses all of our old machinery.
• Generalizes to overdetermined systems in which the number of equations is greater than the number of variables.

Disadvantage: Derivatives are rather expensive: if f(x) = ‖F(x)‖_2^2, then

    g(x) = 2 J(x)^T F(x),
    H(x) = 2 J(x)^T J(x) + Z(x),

where Z(x) involves 2nd derivatives of F.

Newton-like methods for nonlinear equations

Recall our general scheme for function minimization:

    Until x^(k) is a good enough solution,
        Find a downhill search direction p^(k).
        Set x^(k+1) = x^(k) + α_k p^(k), where α_k is a scalar chosen to guarantee that progress is made.
...
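The ideas above can be illustrated concretely. The following is a minimal Python/NumPy sketch (not part of the original notes, and Python rather than the Matlab the course references) of plain Newton's method applied to the polynomial system used as the example earlier: F1(x, y) = x^2 y^3 + x y - 2 and F2(x, y) = 2 x y^2 + x^2 y + x y. The Jacobian J_ik = ∂F_i/∂x_k is formed analytically; note it is not symmetric. Plain Newton is only locally convergent, so the starting guess here is deliberately chosen near one root; finding all roots, or converging from arbitrary starting points, is exactly what the homotopy methods discussed later are for.

```python
import numpy as np

def F(v):
    """Residual of the example polynomial system; a root satisfies F(v) = 0."""
    x, y = v
    return np.array([x**2 * y**3 + x*y - 2.0,
                     2*x*y**2 + x**2*y + x*y])

def J(v):
    """Analytic Jacobian J_ik = dF_i/dx_k (generally not symmetric)."""
    x, y = v
    return np.array([[2*x*y**3 + y,        3*x**2*y**2 + x],
                     [2*y**2 + 2*x*y + y,  4*x*y + x**2 + x]])

def newton(v, tol=1e-12, maxit=50):
    """Plain Newton iteration: solve J(v) p = -F(v), step v <- v + p.

    ||F(v)|| is used only as a stopping test here, not as a merit
    function for a line search -- the notes point out why that is hard.
    """
    for _ in range(maxit):
        Fv = F(v)
        if np.linalg.norm(Fv) < tol:
            break
        v = v + np.linalg.solve(J(v), -Fv)   # Newton step
    return v

# Starting guess chosen close to a real root of this system (the second
# equation factors as x*y*(2*y + x + 1) = 0, so one root lies on the
# line x = -1 - 2*y); plain Newton would not converge from everywhere.
root = newton(np.array([-2.7, 0.85]))
print(root, np.linalg.norm(F(root)))
```

From this nearby start the iteration converges quadratically; moving the initial guess far from the root typically makes the raw Newton step overshoot badly, which motivates both the merit-function discussion above and the globally convergent homotopy methods promised in the plan.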