Optimization: deterministic and stochastic approaches
Course notes to the lectures of ENERGY 284

Jef Caers
Associate Professor of Energy Resources Engineering
Stanford University
Version 2007

Introduction: the use of optimization in Engineering and the Earth Sciences

Most engineering disciplines require optimization. In fact, one could state that the core of any engineering problem is to solve it at the least cost, in the minimum amount of time, making optimal use of resources, and minimizing failure. Examples include maximizing profit and minimizing product development or exploration cost, failure, and risk.

In this course we give a relatively broad overview of the mathematical tools that solve optimization problems and apply them in a class project, which involves a typical problem in the Earth Sciences. The aim is that at the end of this course you will have armed yourself with the relevant mathematics, both theoretical and numerical, so that you can apply it in your own research area or in future applications in your professional life.

One of the most common optimization problems in the earth sciences is the inverse problem, and this is the topic of your class project. The idea is that, on a practical case-study basis, you will learn the (many) pitfalls that exist in solving an inverse problem.

1. Methods for finding zeros of univariate functions

Most practical optimization problems involve many parameters/variables that need to be optimized. In general, we need to solve a multidimensional optimization problem, and the multivariate function that we need to optimize may be continuous or discontinuous. There is no single universal solution to optimization problems. Each problem requires its own technique; some problems may require a hybridization of several techniques. However, the backbone of many optimization routines consists of the optimization of a one-dimensional continuous function.
In fact, many multivariate problems are solved by sequentially solving univariate problems. We will take some time in this section to review some elementary techniques for finding the minimum of a differentiable function. The problem at first seems simple enough: find a zero of a function within a given precision. It turns out the ultimate algorithm that we will arrive at is not that simple.

Consider first the simplest problem of unconstrained minimization of a univariate function g:

    minimize g(x),  x in R

If g is twice continuously differentiable and a local minimum of g exists at a point x*, the following conditions hold at x*:

    Necessary conditions:   1) g'(x*) = 0    2) g''(x*) >= 0
    Sufficient conditions:  1) g'(x*) = 0    2) g''(x*) > 0

So, the problem of finding a minimum is reduced to the problem of finding a zero:

    f(x*) = g'(x*) = 0

Numerically, one never finds a zero exactly. Instead, a small interval [a, b] is constructed such that

    f(a) * f(b) < 0  and  |b - a| < epsilon

The zero is then taken to be any point within [a, b]; hence we say that the zero is "bracketed".
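The bracketing idea above can be illustrated with a minimal sketch. The interval-halving scheme and the function names below are illustrative choices, not part of the notes: starting from an interval [a, b] with f(a) * f(b) < 0, we repeatedly halve the interval, keeping the half on which f still changes sign, until |b - a| < epsilon.

```python
def find_bracketed_zero(f, a, b, eps=1e-8):
    """Shrink a bracketing interval [a, b] with f(a)*f(b) < 0
    until |b - a| < eps, then return the midpoint as the zero."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > eps:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:        # sign change in [a, m]: keep the left half
            b, fb = m, fm
        else:                   # sign change in [m, b]: keep the right half
            a, fa = m, fm
    return 0.5 * (a + b)

# Minimize g(x) = (x - 2)**2 by finding the zero of f(x) = g'(x) = 2*(x - 2).
root = find_bracketed_zero(lambda x: 2.0 * (x - 2.0), 0.0, 5.0)
print(root)  # close to 2.0, satisfying g'(x*) = 0 with g''(x*) = 2 > 0
```

Note that the sign-change condition f(a) * f(b) < 0 is what guarantees (for a continuous f) that a zero lies inside the interval at every step.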
This note was uploaded on 01/24/2011 for the course ERE 284 taught by Professor . during the Spring '10 term at Stanford.