EE103 Fall 2007 Lecture Notes (SEJ), Section 9

SECTION 9: INTRODUCTION TO OPTIMIZATION: THE GAUSS-NEWTON METHOD

Philosophy of Unconstrained Nonlinear Optimization
Newton Method for Optimization
Newton's Method Applied to Nonlinear Least-Squares
A Modification: The Gauss-Newton Method
SECTION 9: INTRODUCTION TO OPTIMIZATION: THE GAUSS-NEWTON METHOD

This section introduces the nonlinear least-squares problem. This problem arises quite often in engineering and science when one must determine the parameters of a given model of a physical process for which a good deal of data is available. Moreover, as we'll see, this type of problem is a difficult optimization problem that is not generally easily solved.

To make the above more specific, the following example should be helpful. A production process that leads to an output, $O$, is the result of using various quantities of the factors of production, $x_1, x_2, \dots, x_n$. It is believed that the "production function" is of the form

$$O = a_0 \, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}$$

From observation, we have at hand the following known values:

$$\left( O_j,\; x_{1,j},\; x_{2,j},\; \dots,\; x_{n,j} \right), \qquad j = 1, \dots, m$$

We wish to use this information to provide estimates of the model's parameters, $a_0, a_1, \dots, a_n$. To this end, we adopt the least-squares philosophy; we choose to attempt to solve the optimization problem

$$\min_{a_0, a_1, \dots, a_n} \; \sum_{j=1}^{m} \left( a_0 \, x_{1,j}^{a_1} x_{2,j}^{a_2} \cdots x_{n,j}^{a_n} - O_j \right)^2 \qquad (1)$$

Typically, $m > n$. Before we approach this type of problem, we need some general concepts of nonlinear optimization.

Philosophy of Unconstrained Nonlinear Optimization

Consider the general unconstrained optimization problem

$$\min_{x} \; g(x), \qquad x \in \mathbb{R}^n \qquad (2)$$

We'll assume that $g$ is sufficiently differentiable whenever we need derivatives. The general idea is to start with a point, $x_0$, and then decide whether it's optimal and, if not,
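As a concrete illustration of problem (1), the following sketch fits the production-function parameters with the Gauss-Newton method named in this section's title: at each step the residuals are linearized and a linear least-squares subproblem is solved for the update. This is not the notes' own code; the data are synthetic and all names (`model`, `gauss_newton`, the chosen starting point and iteration count) are illustrative assumptions.

```python
import numpy as np

def model(a, X):
    # Production function O = a0 * x1^a1 * ... * xn^an.
    # a = (a0, a1, ..., an); X has shape (m, n), one observation per row.
    return a[0] * np.prod(X ** a[1:], axis=1)

def jacobian(a, X):
    # Jacobian of the residuals r_j(a) = model(a, x_j) - O_j.
    m, n = X.shape
    p = np.prod(X ** a[1:], axis=1)              # x1^a1 * ... * xn^an
    J = np.empty((m, n + 1))
    J[:, 0] = p                                   # d r_j / d a0
    J[:, 1:] = (a[0] * p)[:, None] * np.log(X)    # d r_j / d a_k, k >= 1
    return J

def gauss_newton(a, X, O, iters=20):
    # Repeatedly solve the linearized problem min ||J d + r||^2
    # and update a <- a + d (a plain, undamped Gauss-Newton sketch).
    for _ in range(iters):
        r = model(a, X) - O
        J = jacobian(a, X)
        d, *_ = np.linalg.lstsq(J, -r, rcond=None)
        a = a + d
    return a

# Synthetic data: two factors of production, known true parameters.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 3.0, size=(50, 2))
a_true = np.array([2.0, 0.7, 0.3])
O = model(a_true, X)

a_hat = gauss_newton(np.array([1.0, 0.5, 0.5]), X, O)
print(np.round(a_hat, 3))
```

Because the synthetic data are noise-free, the residuals can be driven to zero and the iteration recovers the true parameters; with noisy data one would expect only an approximate fit, and a damped or line-search variant is often needed for reliability.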

This note was uploaded on 10/22/2009 for the course EE 103, taught by Professor Lieven Vandenberghe during the Fall '08 term at UCLA.
