08. NLP (OR Models) - Lecture 8: Nonlinear Programming Models

Lecture 8 – Nonlinear Programming Models

Topics
- General formulations
- Local vs. global solutions
- Solution characteristics
- Convexity and convex programming
- Examples

Nonlinear Optimization

In LP, the objective function and the constraints are linear, and the problems are "easy" to solve. Many real-world engineering and business problems have nonlinear elements and are hard to solve.
General NLP

Minimize f(x)
s.t. g_i(x) (≤, ≥, or =) b_i,  i = 1, ..., m

where
- x = (x_1, ..., x_n) is the n-dimensional vector of decision variables
- f(x) is the objective function
- the g_i(x) are the constraint functions
- the b_i are fixed, known constants
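As a minimal numerical sketch (not part of the slides) of how a problem in this general form can be handed to an off-the-shelf local solver, the snippet below uses SciPy's SLSQP method; the particular objective and constraint are illustrative assumptions only.

# Sketch: a small instance of the general NLP  min f(x) s.t. g(x) <= b,
# solved with a local solver. Objective and constraint are made up for illustration.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # illustrative nonlinear objective: (x1 - 1)^2 + (x2 - 2)^2
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# one inequality constraint g(x) = x1 + x2 <= 2, written for SciPy as b - g(x) >= 0
constraints = [{"type": "ineq", "fun": lambda x: 2.0 - (x[0] + x[1])}]

x0 = np.zeros(2)                      # starting point for the local solver
result = minimize(f, x0, method="SLSQP", constraints=constraints)
print(result.x, result.fun)           # a local (here also global) minimizer

Because this illustrative instance is convex, the local solution returned by SLSQP is also the global one; for general NLPs that guarantee is lost, which is exactly the local-vs.-global issue listed in the topics.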

Examples of NLPs

Example 1
Max f(x) = 3x_1 + 2x_2^4
s.t. x_1 + x_2^2 ≤ 1, x_1 ≥ 0, x_2 unrestricted

Example 2
Max f(x) = e^(c_1 x_1) e^(c_2 x_2) ... e^(c_n x_n)
s.t. Ax = b, x ≥ 0

Example 3 (problems with "decreasing efficiencies")
Min Σ_{j=1..n} f_j(x_j)
s.t. Ax = b, x ≥ 0
where each f_j(x_j) is of the form shown in the figure [figure: f_j(x_j) plotted against x_j]

Examples 2 and 3 can be reformulated as LPs.
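A sketch of why Example 2 reduces to an LP (the standard logarithm argument, not spelled out on the slide): since the logarithm is increasing, maximizing the product of exponentials is equivalent to maximizing its logarithm, which is linear in x.

\[
\max_{Ax = b,\; x \ge 0} \; \prod_{j=1}^{n} e^{c_j x_j}
\;\Longleftrightarrow\;
\max_{Ax = b,\; x \ge 0} \; \ln\!\Big(\prod_{j=1}^{n} e^{c_j x_j}\Big)
= \max_{Ax = b,\; x \ge 0} \; \sum_{j=1}^{n} c_j x_j .
\]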
NLP Graphical Solution Method

Max f(x_1, x_2) = x_1 x_2
s.t. 4x_1 + x_2 ≤ 8, x_1 ≥ 0, x_2 ≥ 0

[Figure: feasible region in the (x_1, x_2) plane, with the constraint line meeting the axes at x_1 = 2 and x_2 = 8, and objective contours f(x) = 1 and f(x) = 2.]

The optimal solution will lie on the line g(x) = 4x_1 + x_2 - 8 = 0.
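A small numerical check of this example (a sketch, not part of the slides), again using SciPy's SLSQP solver; it should confirm that the optimum sits on the constraint line, at roughly x = (1, 4) with f(x) = 4.

# Sketch: numerically checking the graphical example
#   max x1*x2  s.t.  4*x1 + x2 <= 8,  x1 >= 0,  x2 >= 0
# SciPy minimizes, so the objective is negated.
from scipy.optimize import minimize

objective = lambda x: -(x[0] * x[1])                          # negate for maximization
constraints = [{"type": "ineq", "fun": lambda x: 8.0 - 4.0 * x[0] - x[1]}]
bounds = [(0.0, None), (0.0, None)]                           # x1 >= 0, x2 >= 0

result = minimize(objective, x0=[0.5, 0.5], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, -result.fun)   # expected: approximately [1. 4.] and 4.0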

Solution Characteristics

The solution is not a vertex of the feasible region. For this particular problem the solution is on the boundary of the feasible region; this is not always the case.

Gradient of f(x): ∇f(x_1, x_2) = (∂f/∂x_1, ∂f/∂x_2)^T.
Here this gives ∂f/∂x_1 = x_2, ∂f/∂x_2 = x_1, and ∂g/∂x_1 = 4, ∂g/∂x_2 = 1.

At optimality we have ∇f(x_1, x_2) = ∇g = (4, 1)^T.
In a more general case, ∇f(x_1, x_2) = μ∇g(x_1, x_2) with μ ≥ 0. (In this case, μ = 1.)
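A worked check of the gradient condition (my own filling-in, consistent with the slide's numbers): substituting x_2 = 8 - 4x_1 into f = x_1 x_2 and maximizing gives the optimum x* = (1, 4), at which

\[
\nabla f(x^*) = \begin{pmatrix} x_2^* \\ x_1^* \end{pmatrix}
             = \begin{pmatrix} 4 \\ 1 \end{pmatrix},
\qquad
\nabla g(x^*) = \begin{pmatrix} 4 \\ 1 \end{pmatrix},
\qquad\text{so}\qquad
\nabla f(x^*) = \mu\,\nabla g(x^*) \ \text{with}\ \mu = 1 .
\]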