7. GENERAL NONLINEAR PROGRAMMING

In this chapter we develop necessary and sufficient conditions for optimality for the general nonlinear programming problem
$$(\mathrm{NLP})\qquad \begin{aligned} \text{minimize}\quad & f(x)\\ \text{subject to}\quad & g_i(x) = 0, \quad \text{for } i = 1,\dots,m,\\ & g_i(x) \le 0, \quad \text{for } i = m+1,\dots,p. \end{aligned}$$

7.1 Equality-Constrained Problems

Lagrange Multipliers

We begin with the special case of linear-equality-constrained problems:
$$(\mathrm{LEP})\qquad \text{minimize } f(x) \ \text{ subject to } Ax = b.$$
This is the case of (NLP) in which $g(x) = Ax - b$ and $p = m$. It provides a simple prototype of the results in this chapter.

Theorem 7.1.1. Suppose $f : \mathbb{R}^n \to \mathbb{R}$ is differentiable at $\bar x$ and assume that $A\bar x = b$.
(a) If $\bar x$ is a local minimizer for (LEP), then $\nabla f(\bar x) \in \operatorname{range}(A^T)$.
(b) If $f$ is convex and $\nabla f(\bar x) \in \operatorname{range}(A^T)$, then $\bar x$ is a global minimizer for (LEP).

Observation 7.1.2. The condition $\nabla f(\bar x) \in \operatorname{range}(A^T)$ can be rewritten as $\nabla f(\bar x) + A^T\lambda = 0$ for some $\lambda$. In other words,
$$\nabla f(\bar x) + \sum_{i=1}^{m} \lambda_i\, a^{(i)} = 0,$$
where $A^T = [a^{(1)}, \dots, a^{(m)}]$. We call $\lambda_i$ the Lagrange multiplier for the constraint $a^{(i)} \cdot x = b_i$.

Proof of Theorem 7.1.1. Part (a): Assume $\bar x$ is a local minimizer for (LEP). To show $\nabla f(\bar x) \in \operatorname{range}(A^T) = \operatorname{null}(A)^{\perp}$, we must show that $\nabla f(\bar x) \cdot d = 0$ whenever $Ad = 0$. (See Figure 7.1.) Consider $d \in \operatorname{null}(A)$ and define $\varphi_d(t) = f(\bar x + td)$. Because $A(\bar x + td) = A\bar x + tAd = b + 0 = b$, we see that $\bar x + td$ satisfies the constraints of (LEP) for all $t$. Thus $t = 0$ is a local minimizer for $\varphi_d$, and hence $0 = \varphi_d'(0) = \nabla f(\bar x) \cdot d$, as desired.

Part (b): Assume that $f$ is convex, $A\bar x = b$, and $\nabla f(\bar x) = -A^T\lambda$. By convexity, each vector $x$ must satisfy
$$f(x) \ge f(\bar x) + \nabla f(\bar x) \cdot (x - \bar x) = f(\bar x) + (-A^T\lambda) \cdot (x - \bar x) = f(\bar x) - \lambda \cdot A(x - \bar x).$$
If $x$ is feasible (so $Ax = b = A\bar x$), then this implies that $f(x) \ge f(\bar x) + 0 = f(\bar x)$. Hence $\bar x$ is a global minimizer for (LEP). ∎
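The first-order condition of Theorem 7.1.1 can be checked numerically. The sketch below (not part of the original text; the problem data are arbitrary) minimizes the convex function $f(x) = \frac{1}{2}\|x\|^2$ over $Ax = b$, whose minimizer is the minimum-norm solution $\bar x = A^T(AA^T)^{-1}b$, and verifies that $\nabla f(\bar x)$ lies in $\operatorname{range}(A^T)$, equivalently that it is orthogonal to $\operatorname{null}(A)$.

```python
import numpy as np

# Illustrative instance of (LEP): minimize f(x) = (1/2)||x||^2 s.t. Ax = b.
# The data A, b are random, chosen only so the constraints are consistent.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))   # 2 constraints, 5 variables, full row rank
b = rng.standard_normal(2)

# Minimum-norm solution: x_bar = A^T (A A^T)^{-1} b.
x_bar = A.T @ np.linalg.solve(A @ A.T, b)
grad = x_bar                      # grad f(x) = x for this choice of f

# Feasibility: A x_bar = b.
assert np.allclose(A @ x_bar, b)

# grad f(x_bar) in range(A^T): solve A^T mu = grad and check zero residual.
mu, *_ = np.linalg.lstsq(A.T, grad, rcond=None)
assert np.allclose(A.T @ mu, grad)

# Equivalently, grad f(x_bar) is orthogonal to null(A): build a basis for
# null(A) from the SVD and check grad . d = 0 for every basis vector d.
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[2:].T             # columns span null(A)
assert np.allclose(null_basis.T @ grad, 0)
```

Note that the Lagrange multiplier of Observation 7.1.2 appears here as $\lambda = -\mu$; since $\operatorname{range}(A^T)$ is a subspace, the sign convention is immaterial.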
[Figure 7.1. Level curves of $f$ (at values 2.8, 3.4, 3.9, 4.5, 5.0) and the feasible line $Ax = b$, illustrating $\nabla f(\bar x) \perp \operatorname{null}(A)$.]

Example 7.1.3 (Equality-constrained linear least squares). Consider the problem of minimizing $\|Cx - d\|$ subject to $Ax = b$, which was discussed at the end of Chapter 6. This is equivalent to minimizing $f(x) = \frac{1}{2}\|Cx - d\|^2$ subject to $Ax = b$. We write $f(x)$ out as
$$f(x) = \tfrac{1}{2}\|Cx - d\|^2 = \tfrac{1}{2}(Cx - d) \cdot (Cx - d) = \tfrac{1}{2}\left[x^T C^T C x - 2(C^T d) \cdot x + d \cdot d\right]$$
and differentiate to get $\nabla f(x) = C^T C x - C^T d$ and $Hf(x) = C^T C$. The necessary and sufficient conditions for optimality are therefore $A\bar x = b$ and $C^T(C\bar x - d) = A^T u$ for some $u$, which recovers the result found in Section 6.5.
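The optimality conditions of Example 7.1.3 are linear in $(\bar x, u)$, so they can be solved as one square system. The sketch below (an illustration added here, with arbitrary random data) stacks the gradient condition $C^TC x + A^T u = C^T d$ and the feasibility condition $Ax = b$ into a single KKT-style matrix and checks both conditions on the solution.

```python
import numpy as np

# Solve min (1/2)||Cx - d||^2 subject to Ax = b via the linear system
#   [ C^T C   A^T ] [ x ]   [ C^T d ]
#   [   A      0  ] [ u ] = [   b   ]
# Data are random; C has full column rank and A full row rank, which
# makes the block matrix invertible.
rng = np.random.default_rng(1)
C = rng.standard_normal((6, 4))
d = rng.standard_normal(6)
A = rng.standard_normal((2, 4))
b = rng.standard_normal(2)

n, m = 4, 2
K = np.block([[C.T @ C, A.T],
              [A, np.zeros((m, m))]])
rhs = np.concatenate([C.T @ d, b])
sol = np.linalg.solve(K, rhs)
x, u = sol[:n], sol[n:]

assert np.allclose(A @ x, b)                     # feasibility
assert np.allclose(C.T @ (C @ x - d), -A.T @ u)  # gradient condition
```

With this sign convention the multiplier satisfying $C^T(C\bar x - d) = A^T u$ in the text is $-u$; as before, the sign is immaterial.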
[Course notes: Linear Programming, Douglas Ward, Spring '12]