Preliminary draft only: please check for final version

ARE211, Fall 2007

LECTURE #21: THU, NOV 8, 2007    PRINT DATE: AUGUST 21, 2007 (NPP1)

Contents

6.   Nonlinear Programming Problems and the Karush-Kuhn-Tucker conditions   1
6.1. Existence and Uniqueness   3
6.2. Necessary conditions for a solution to an NPP   5
6.3. Role of the Constraint Qualification   6
6.4. Demonstration that the KKT conditions are necessary   8

6. Nonlinear Programming Problems and the Karush-Kuhn-Tucker conditions

We are going to look at the technique for solving the general nonlinear programming problem. We did this graphically at the beginning of the year, but we now need to do it formally, and to see why the calculus conditions do what they are supposed to do.

The general nonlinear programming problem (NPP) is the following:

    maximize f(x) subject to g(x) ≤ b,   where f : R^n → R and g : R^n → R^m.

Terminology:

• f is called the objective function;
• g is a vector of m constraint functions, and, of course, b ∈ R^m. That is, the individual constraints are stacked together to form a vector-valued function;
• the set of x such that g(x) ≤ b is called the feasible set or constraint set for the problem.

For the remainder of the course, unless otherwise notified, we will assume that both the objective function and the constraint functions are continuously differentiable. Indeed, we will in fact assume that they are as many times continuously differentiable as we could ever need.

Emphasize that this setup is completely general, i.e., it covers every problem you are ever likely to encounter:

• it can handle constraints of the form g(x) ≥ b (multiply both sides by -1 to obtain -g(x) ≤ -b);
• it can handle constraints of the form x ≥ 0 (write each one as -x_i ≤ 0);
• it can even handle constraints of the form g(x) = b (replace the equality with the pair of inequalities g(x) ≤ b and -g(x) ≤ -b).
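To see the canonical form in action, here is a minimal numerical sketch (not part of the lecture notes): it poses a small problem in exactly the form "maximize f(x) subject to g(x) ≤ b" and solves it with SciPy. The particular objective, constraint values, and use of scipy.optimize.minimize are all illustrative assumptions, not the lecture's method.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective: maximize f(x) = -(x1-1)^2 - (x2-2)^2.
# SciPy minimizes, so we hand it the negated objective.
def neg_f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# Constraints stacked in the canonical form g(x) <= b:
#   g1(x) = x1 + x2 <= 2,  g2(x) = -x1 <= 0,  g3(x) = -x2 <= 0
def g(x):
    return np.array([x[0] + x[1], -x[0], -x[1]])

b = np.array([2.0, 0.0, 0.0])

# SciPy's 'ineq' convention wants fun(x) >= 0, i.e. b - g(x) >= 0
cons = {"type": "ineq", "fun": lambda x: b - g(x)}
res = minimize(neg_f, x0=np.array([0.5, 0.5]), constraints=cons)
print(res.x)  # constrained maximizer; analytically (0.5, 1.5)
```

The unconstrained maximizer (1, 2) violates the first constraint, so the solution lies on the boundary x1 + x2 = 2, which is exactly the situation the KKT conditions characterize.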
For example, given u : R^2 → R, consider the problem

    maximize u(x) subject to p · x ≤ y,  x ≥ 0.

What is g in this case? g is a linear function, defined as follows:

    g1(x) = p · x,   g2(x) = -x1,   g3(x) = -x2,

so that the problem can be written as

    maximize u(x) subject to G x ≤ b,  where

        [ p1  p2 ]            [ y ]
    G = [ -1   0 ]   and  b = [ 0 ]
        [  0  -1 ]            [ 0 ]

While there are many advantages to having a single, general version of the KT conditions, the generality comes at a (small) cost. When it comes to actually computing the solution to an NPP (as opposed to just understanding what's going on), it is very convenient to treat equality constraints differently from inequality constraints. I explain what you need to do on page 9. You should make sure to refer to this discussion before you start on the NPP problem set.
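As a concrete sketch of the consumer problem above, the code below picks illustrative values p = (1, 2), y = 8 and a Cobb-Douglas utility u(x) = x1 · x2 (both choices are my assumptions, not from the notes), builds the stacked matrix G and vector b exactly as displayed, and solves numerically.

```python
import numpy as np
from scipy.optimize import minimize

p = np.array([1.0, 2.0])  # prices (illustrative values)
y = 8.0                   # income (illustrative value)

# Stack the budget constraint p.x <= y and the sign constraints
# -x1 <= 0, -x2 <= 0 into the single system G x <= b.
G = np.array([[p[0], p[1]],
              [-1.0, 0.0],
              [0.0, -1.0]])
b = np.array([y, 0.0, 0.0])

def neg_u(x):
    # Assumed Cobb-Douglas utility u(x) = x1 * x2, negated for minimization
    return -x[0] * x[1]

cons = {"type": "ineq", "fun": lambda x: b - G @ x}
res = minimize(neg_u, x0=np.array([1.0, 1.0]), constraints=cons)
print(res.x)  # demand; analytically (y/(2 p1), y/(2 p2)) = (4, 2)
```

At the solution the budget constraint binds while the two sign constraints are slack, which previews the complementary-slackness part of the KKT conditions discussed in this section.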
Fall '07. Simon.