
AAS 03-261

GENERALIZED GRADIENT SEARCH AND NEWTON'S METHODS FOR MULTILINEAR ALGEBRA ROOT-SOLVING AND OPTIMIZATION APPLICATIONS

James D. Turner, Ph.D.

ABSTRACT

A standard problem in optimization involves solving for the roots of nonlinear functions defined by f(x) = 0, where x is the unknown variable. Classical algorithms consist of first-order gradient search and Newton-Raphson methods. This paper generalizes the Newton-Raphson method for multilinear algebra root-finding problems by introducing a non-iterative multilinear reversion-of-series approximation. The series solution is made possible by introducing an artificial independent variable through an embedding process. Automatic differentiation techniques are defined for evaluating the generalized iteration algorithms. Operator-overloading strategies use hidden tools to redefine the computer's intrinsic mathematical operators and library functions for building high-order sensitivity models. Exact partial derivatives are computed from first through fourth order, and the numerical results are accurate to the working precision of the machine. The analyst is completely freed from having to build, code, and validate partial-derivative models. Accelerated convergence rates are demonstrated for scalar and vector root-solving problems. An integrated generalized gradient search and Newton-Raphson algorithm is presented for rapidly optimizing the very challenging classical Rosenbrock banana function. The integration of generalized algorithms and automatic differentiation is expected to have broad potential for impacting the design and use of mathematical programming tools for knowledge-discovery applications in science and engineering.

INTRODUCTION

Many applications in engineering, science, and mathematics require solutions for equations of the form f(x) = 0, where x is the unknown root of the equation.
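The operator-overloading strategy described in the abstract can be illustrated with a minimal first-order forward-mode sketch. The paper builds sensitivity models through fourth order; the `Dual` class below, restricted to first derivatives, is an assumption introduced purely for illustration and is not the author's implementation.

```python
class Dual:
    """Minimal dual number: a value and its first derivative, propagated
    automatically by overloading the intrinsic arithmetic operators."""

    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __sub__(self, other):
        other = self._wrap(other)
        return Dual(self.val - other.val, self.der - other.der)

    def __mul__(self, other):
        # Product rule, applied mechanically by the overloaded operator.
        other = self._wrap(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__


# d/dx (x^2 - 2) at x = 3: seed the derivative slot with 1 and evaluate.
x = Dual(3.0, 1.0)
y = x * x - 2.0
# y.val holds the function value, y.der the exact derivative (accurate to
# machine working precision, as the abstract notes).
```

Because the derivative is carried alongside the value through every operation, the analyst never writes or validates a partial-derivative model by hand.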
Given an initial guess, x, the classical Newton-Raphson strategy seeks a correction, δ, by assuming that the function f(x + δ) can be expanded as the following Taylor series:

0 = f(x + δ) = f(x) + f'(x)δ + f''(x)δ²/2! + f'''(x)δ³/3! + ⋯    (1)

where ' denotes the derivative with respect to x. Eliminating terms of O(δ²) and higher provides the correction model δ = −f(x)/f'(x) for the improved root guess given by x := x + δ. Under well-defined conditions on f(x) and its derivatives, the solution accuracy is improved by repeating the procedure until |f(x)| < ε, where ε is a prescribed solution tolerance. Even though each iteration doubles the number of accurate digits, many algorithms have been proposed for accelerating the convergence rate.1–5 All of these algorithms retain O(δ²) and higher terms in Eq. (1) and generate higher-order approximations for δ
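The classical iteration above can be sketched directly. This is a minimal implementation of the textbook scalar Newton-Raphson loop, not code from the paper; the function names and the sample root-finding problem are assumptions for illustration.

```python
def newton_raphson(f, fprime, x, eps=1e-12, max_iter=50):
    """Classical Newton-Raphson: repeat x := x + delta with
    delta = -f(x)/f'(x), stopping once |f(x)| < eps."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:           # prescribed solution tolerance
            return x
        x = x - fx / fprime(x)      # first-order correction from Eq. (1)
    return x


# Example: root of f(x) = x^2 - 2 starting from the initial guess x = 1.
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Each pass through the loop roughly doubles the number of accurate digits, which is the quadratic convergence the text refers to.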


that cancel errors through a specified approximation order. Scalar equations have been handled successfully; unfortunately, vector and higher-order algebraic problems have met with limited success.
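For the vector case, the scalar correction δ = −f(x)/f'(x) generalizes to solving the linear system J(x)δ = −f(x), where J is the Jacobian. The sketch below shows this standard multivariate form, assuming NumPy; it illustrates the baseline method whose limitations the paper addresses, not the paper's generalized algorithm.

```python
import numpy as np


def newton_vector(f, jac, x, eps=1e-12, max_iter=50):
    """Vector Newton-Raphson: each iteration solves J(x) delta = -f(x)
    and updates x := x + delta, stopping once ||f(x)|| < eps."""
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < eps:
            return x
        x = x + np.linalg.solve(jac(x), -fx)
    return x


# Example: intersect the circle x0^2 + x1^2 = 2 with the line x0 = x1.
f = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]],
                          [1.0, -1.0]])
root = newton_vector(f, jac, np.array([2.0, 0.5]))
```

The need for an exact Jacobian at every iterate is precisely where the automatic-differentiation machinery of the paper pays off.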