MthSc 810: Mathematical Programming
Lecture 20

Pietro Belotti
Dept. of Mathematical Sciences
Clemson University

December 6, 2011

Reading for today: Sections 6.1, 6.2
Reading for Thursday: Sections 6.3, 6.4

Recap: Lagrangian relaxation

Consider a problem

  z_OPT = min  c^T x
          s.t. A x = b
               d^T x = f
               x ≥ 0.

Lagrangian relaxation applied to the last constraint yields

  min  c^T x + λ (d^T x − f)
  s.t. A x = b
       x ≥ 0,

and a lower bound on z_OPT for any λ ∈ ℝ.

Lagrangian function

Consider the function

  L(λ) = min { c^T x + λ (d^T x − f) : A x = b, x ≥ 0 }
       = −λf + min { (c + λd)^T x : A x = b, x ≥ 0 },

which gives a lower bound on z_OPT for any λ ∈ ℝ.

What does it look like? For variable c, the function

  G(c) = min { c^T x : A x = b, x ≥ 0 }

is concave: it is the optimal objective value of an LP whose objective function has parametric coefficients, just like L(λ). Similarly, it can be proved that L(λ) is concave.

A tight lower bound

Because L(λ) ≤ z_OPT for any λ ∈ ℝ, the tightest lower bound is

  max { L(λ) : λ ∈ ℝ }.

If the relaxed constraint were d^T x ≤ f, then λ would be constrained in sign (λ ≥ 0). In any case, max_{λ ∈ ℝ} L(λ) is the maximization of a concave function: easy! It is equivalent to min { −L(λ) : λ ∈ ℝ }, where −L(λ) is convex.

Minimizing −L(λ)

−L(λ) is a convex, piecewise linear function. Because it is piecewise linear, no gradient exists at the breakpoints; however, it admits a subgradient. Consider λ = λ̄:

  L(λ̄)  = −λ̄f + min { (c + λ̄d)^T x : A x = b, x ≥ 0 },
  −L(λ̄) = λ̄f − (c + λ̄d)^T x̄,

where x̄ is the optimal solution of the relaxation with λ = λ̄. A subgradient of −L(λ) at λ = λ̄ is f − d^T x̄. If f...
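The procedure above can be sketched in code: evaluate L(λ) by solving the relaxed LP, then move λ along the subgradient d^T x̄ − f (the ascent direction for L, i.e. the negative of the subgradient of −L) with a diminishing step size. This is a minimal illustration assuming SciPy's `linprog`; the problem data below are a toy instance invented for this sketch, not taken from the lecture.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (hypothetical data): min c^T x  s.t.  A x = b,  d^T x = f,  x >= 0.
# True optimum: x = (0.5, 0.5), z_OPT = 1.5.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
d = np.array([1.0, 0.0])
f = 0.5

def L(lam):
    """L(lam) = -lam*f + min{(c + lam*d)^T x : A x = b, x >= 0}.

    Returns the Lagrangian value and the optimal x-bar of the relaxation.
    """
    res = linprog(c + lam * d, A_eq=A, b_eq=b, bounds=[(0, None)] * len(c))
    return -lam * f + res.fun, res.x

# Subgradient ascent on the concave function L(lam):
# step in the direction d^T x_bar - f with step size 1/k.
lam = 0.0
for k in range(1, 200):
    _, xbar = L(lam)
    lam += (d @ xbar - f) / k

best, _ = L(lam)
print(f"lambda ~ {lam:.3f}, dual bound ~ {best:.3f}")
```

On this instance the multiplier converges to λ = 1, where the dual bound L(λ) meets z_OPT = 1.5; the diminishing step size 1/k is the standard choice that guarantees convergence for subgradient methods.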
This note was uploaded on 03/14/2012 for the course MTHSC 810 taught by Professor Staff during the Fall '08 term at Clemson.