# mthsc810-lecture03 - MthSc 810 Mathematical Programming...

This preview shows pages 1–2. Sign up to view the full content.

This preview has intentionally blurred sections. Sign up to view the full version.

View Full Document
This is the end of the preview. Sign up to access the rest of the document.

MthSc 810: Mathematical Programming, Lecture 3
Pietro Belotti, Dept. of Mathematical Sciences, Clemson University
September 1, 2011

Reading for today: Sections 1.4, 2.1, 2.2
Reading for Sep. 6: Sections 2.3–2.7

## Convex problems

**Def.:** An optimization problem is *convex* if

- the objective function is convex, and
- all constraints are convex.

Convex optimization problems are *easy*: if a problem $P$ is convex, a local optimum $x^\star$ of $P$ is also a global optimum of $P$.

(Hint) When modeling an optimization problem, it is desirable to end up with a convex problem.

## Nonconvex problems

$$z_{\mathrm{opt}} = \min f(x) \quad \text{s.t.} \quad f_i(x) \le 0 \quad \forall i = 1, 2, \ldots, m$$

What if any of the $f_i$, $i = 1, \ldots, m$ (or $f$ itself), is not convex? In order to solve such problems,

- We can aim for a *feasible solution*, whose objective function value is an upper bound $z_{\mathrm{ub}} \ge z_{\mathrm{opt}}$.
- We can obtain a *convex relaxation*, e.g. by eliminating the nonconvex constraints $\Rightarrow$ we get a lower bound $z_{\mathrm{lb}} \le z_{\mathrm{opt}}$.

## Linear Optimization

Linear optimization problems are *convex*:

$$\min\; c^\top x \quad \text{s.t.} \quad Ax = b, \; x \ge 0$$

- $f(x) = x$ and $g(x) = -x$ are both convex functions:
  $$f(\alpha x' + (1-\alpha) x'') = \alpha x' + (1-\alpha) x'' = \alpha f(x') + (1-\alpha) f(x'')$$
  $$g(\alpha x' + (1-\alpha) x'') = -(\alpha x' + (1-\alpha) x'') = \alpha g(x') + (1-\alpha) g(x'')$$
  $\Rightarrow$ $c^\top f(x)$ and $c^\top g(x)$ are convex too for $c \ge 0$.
- The objective function is a sum of convex functions...
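The convexity argument above can be spot-checked numerically. The sketch below (not from the lecture; the toy LP instance and all variable names are made up for illustration) verifies that $f(x)=x$ and $g(x)=-x$ satisfy Jensen's inequality with equality, and solves a tiny LP in the standard form $\min c^\top x$ s.t. $Ax=b$, $x \ge 0$ by comparing the vertices of its feasible segment:

```python
import random

# Spot-check the convexity identity from the slides: for f(x) = x and
# g(x) = -x, Jensen's inequality holds with equality (affine functions
# are both convex and concave).
f = lambda x: x
g = lambda x: -x
random.seed(0)
for _ in range(1000):
    xp, xpp = random.gauss(0, 1), random.gauss(0, 1)
    a = random.random()  # alpha in [0, 1)
    assert abs(f(a * xp + (1 - a) * xpp)
               - (a * f(xp) + (1 - a) * f(xpp))) < 1e-9
    assert abs(g(a * xp + (1 - a) * xpp)
               - (a * g(xp) + (1 - a) * g(xpp))) < 1e-9

# A hypothetical toy LP:  min x1 + 2*x2  s.t.  x1 + x2 = 1,  x >= 0.
# The feasible set is the segment between (1, 0) and (0, 1); a linear
# objective attains its minimum at a vertex, so comparing the two
# endpoints suffices.
c = (1.0, 2.0)
vertices = [(1.0, 0.0), (0.0, 1.0)]
obj = lambda x: c[0] * x[0] + c[1] * x[1]
best = min(vertices, key=obj)
print(best, obj(best))  # -> (1.0, 0.0) 1.0
```

For general LPs one would of course use a solver (the simplex method, covered later in the course, systematically walks such vertices) rather than enumerating vertices by hand.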