IE417: Nonlinear Programming — Lecture 12
Jeff Linderoth
Department of Industrial and Systems Engineering, Lehigh University
16th March 2006

Quiz Discussion

Motivation

- We are interested in determining conditions under which we can verify that a solution is optimal, for constrained problems.
- For a very simple example, let's assume we are minimizing functions that are:
  - One-dimensional
  - Continuous
  - Differentiable
- Recall: a function f(x) is convex on a set S if for all a ∈ S and b ∈ S,
  f(λa + (1 − λ)b) ≤ λf(a) + (1 − λ)f(b).

Why do we care?

- Algorithms for nonlinear programming work to find points that satisfy these conditions.
- When faced with a problem that you don't know how to handle, write down the optimality conditions.
- Often you can learn a lot about a problem by examining the properties of its optimal solutions.

(1-D) Constrained Optimization

- Now we consider the following problem for a scalar variable x ∈ R:
  z* = min { f(x) : 0 ≤ x ≤ u }
- There are three cases for where an optimal solution might be:
  - x = 0
  - 0 < x < u
  - x = u

Breaking it down

- If 0 < x < u, then the necessary and sufficient conditions for optimality are the same as in the unconstrained case.
- You should know these all too well!
- Namely, a necessary condition is that f′(x) = 0.

What if NOT 0 < x < u?

- If x = 0, then we need f′(x) ≥ 0 (necessary), f′(x) > 0 (sufficient).
- If x = u, then we need f′(x) ≤ 0 (necessary), f′(x) < 0 (sufficient).
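The convexity inequality above can be spot-checked numerically. The sketch below (a hypothetical helper, not part of the lecture) samples random points a, b in an interval and random λ in [0, 1], and reports whether the inequality f(λa + (1 − λ)b) ≤ λf(a) + (1 − λ)f(b) ever fails:

```python
import random

def is_convex_on_interval(f, lo, hi, trials=1000, tol=1e-9):
    """Spot-check the convexity inequality
    f(lam*a + (1-lam)*b) <= lam*f(a) + (1-lam)*f(b)
    at randomly sampled a, b in [lo, hi] and lam in [0, 1].
    Returns False as soon as a violation is found."""
    for _ in range(trials):
        a = random.uniform(lo, hi)
        b = random.uniform(lo, hi)
        lam = random.uniform(0.0, 1.0)
        lhs = f(lam * a + (1 - lam) * b)
        rhs = lam * f(a) + (1 - lam) * f(b)
        if lhs > rhs + tol:
            return False
    return True

print(is_convex_on_interval(lambda x: x * x, -5, 5))   # x^2 is convex: True
print(is_convex_on_interval(lambda x: -x * x, -5, 5))  # -x^2 is concave: False
```

This is only a sampling heuristic: a True result does not prove convexity, but a False result certifies a violation of the inequality.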
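The three cases above can be collected into one numerical check. The sketch below (a hypothetical helper, assuming f is differentiable and f′ is supplied by the caller) tests the first-order necessary conditions for min f(x) subject to 0 ≤ x ≤ u at a candidate point:

```python
def satisfies_first_order(f_prime, x, u, tol=1e-8):
    """First-order necessary condition for min f(x), 0 <= x <= u:
      left boundary  x = 0:      f'(0) >= 0
      right boundary x = u:      f'(u) <= 0
      interior       0 < x < u:  f'(x) =  0
    """
    if abs(x) <= tol:              # case x = 0
        return f_prime(x) >= -tol
    if abs(x - u) <= tol:          # case x = u
        return f_prime(x) <= tol
    return abs(f_prime(x)) <= tol  # case 0 < x < u

# Example: f(x) = (x - 3)^2 on [0, 2]; the minimizer is the boundary point x = 2.
fp = lambda x: 2 * (x - 3)
print(satisfies_first_order(fp, 2.0, 2.0))  # True:  f'(2) = -2 <= 0
print(satisfies_first_order(fp, 0.0, 2.0))  # False: f'(0) = -6 < 0
print(satisfies_first_order(fp, 1.0, 2.0))  # False: f'(1) = -4 != 0
```

Note that the boundary conditions are one-sided: at x = 0 the function may still be decreasing into the infeasible region x < 0, which is why f′(0) ≥ 0 (rather than f′(0) = 0) suffices there.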
This note was uploaded on 02/29/2008 for the course IE 417 taught by Professor Linderoth during the Spring '08 term at Lehigh University.