IE417: Nonlinear Programming, Lecture 12
Jeff Linderoth
Department of Industrial and Systems Engineering, Lehigh University
16 March 2006

Quiz Discussion

Motivation

We are interested in determining conditions under which we can verify that a solution to a constrained problem is optimal. For a very simple example, let's assume we are minimizing functions that are:
- one-dimensional,
- continuous, and
- differentiable.

Recall: a function f(x) is convex on a set S if for all a in S, b in S, and lambda in [0, 1],

    f(lambda*a + (1 - lambda)*b) <= lambda*f(a) + (1 - lambda)*f(b).

Why do we care?

- Algorithms for nonlinear programming work to find points that satisfy these conditions.
- When faced with a problem that you don't know how to handle, write down the optimality conditions.
- Often you can learn a lot about a problem by examining the properties of its optimal solutions.

(1D) Constrained Optimization

Now we consider the following problem for a scalar variable x in R:

    z* = min { f(x) : 0 <= x <= u }

There are three cases for where an optimal solution x* might lie:
- x* = 0
- 0 < x* < u
- x* = u

Breaking it down

If 0 < x* < u, then the necessary and sufficient conditions for optimality are the same as in the unconstrained case. You should know these all too well! Namely, a necessary condition is that f'(x*) = 0.

What if NOT 0 < x* < u?

- If x* = 0, then we need f'(x*) >= 0 (necessary), f'(x*) > 0 (sufficient).
- If x* = u, then we need f'(x*) <= 0 (necessary), f'(x*) < 0 (sufficient).
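The convexity definition recalled above can be checked numerically on a grid of lambda values. This is only an illustrative sketch, not part of the lecture: the helper name `is_convex_on_samples` and the example functions are assumptions, and sampling lambdas can only refute convexity, never prove it.

```python
def is_convex_on_samples(f, a, b, num_lambdas=101):
    """Test f(lam*a + (1-lam)*b) <= lam*f(a) + (1-lam)*f(b)
    for lam on an evenly spaced grid in [0, 1]."""
    for i in range(num_lambdas):
        lam = i / (num_lambdas - 1)
        lhs = f(lam * a + (1 - lam) * b)
        rhs = lam * f(a) + (1 - lam) * f(b)
        if lhs > rhs + 1e-12:  # small tolerance for floating-point error
            return False
    return True

# x^2 is convex on any interval; -x^2 is not.
print(is_convex_on_samples(lambda x: x ** 2, -3.0, 5.0))   # True
print(is_convex_on_samples(lambda x: -x ** 2, -3.0, 5.0))  # False
```

A failing lambda (e.g. lam = 0.5 for -x^2) is a certificate of non-convexity; passing the grid merely fails to find a counterexample.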
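The three boundary/interior cases above can be sketched as a small checker for the first-order necessary conditions of min f(x) subject to 0 <= x <= u. This is a hypothetical illustration, assuming the derivative f' is supplied as a function; the name `first_order_necessary` and the tolerance are my own choices, not from the lecture.

```python
def first_order_necessary(fprime, x, u, tol=1e-8):
    """First-order necessary conditions for min f(x) s.t. 0 <= x <= u:
       x = 0:      f'(x) >= 0
       0 < x < u:  f'(x)  = 0
       x = u:      f'(x) <= 0
    Returns True if the candidate x satisfies the condition for its case."""
    g = fprime(x)
    if abs(x) <= tol:        # case x = 0: descent must not point into [0, u]
        return g >= -tol
    if abs(x - u) <= tol:    # case x = u: descent must not point back inside
        return g <= tol
    return abs(g) <= tol     # interior case: unconstrained stationarity

# f(x) = (x - 3)^2 on [0, 10], so f'(x) = 2(x - 3); minimizer at interior x = 3.
print(first_order_necessary(lambda x: 2 * (x - 3), 3.0, 10.0))  # True
print(first_order_necessary(lambda x: 2 * (x - 3), 0.0, 10.0))  # False: f'(0) = -6
```

Note these are only necessary conditions; as the slides state, the strict-inequality versions at the boundaries are the sufficient ones.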