Linear Programming

Lecture 2: Introduction to Linear Programming

Contents

1 Definition and geometry

A mathematical program is a linear program if it has (i) continuous variables, (ii) one linear objective function, and (iii) constraints that are all linear equalities or inequalities. A generic linear programming (LP) problem is defined as follows:

\[
\begin{array}{lll}
\text{maximize} & c^T x & \\
\text{subject to} & a_i^T x \ge b_i, & i \in G, \\
& a_i^T x = b_i, & i \in E, \\
& a_i^T x \le b_i, & i \in L, \\
& x_j \ge 0, & j \in P, \\
& x_j \le 0, & j \in N.
\end{array}
\]

The last two sets of constraints, although inequalities, are special and are usually treated separately. The vector $c$ is called the objective vector, and the numbers $b_i$, $i \in G \cup E \cup L$, are called the RHS coefficients. We can, of course, stack up the constraints of each kind and rewrite the above problem in the following form:

\[
\begin{array}{ll}
\text{maximize} & c^T x \\
\text{subject to} & A_g x \ge b_g, \quad A_e x = b_e, \quad A_l x \le b_l, \\
& x_j \ge 0 \; (j \in P), \quad x_j \le 0 \; (j \in N).
\end{array}
\]

1.1 Why bother with LPs?

One part of the answer lies in the geometry of the feasible region of an LP. Since equality constraints simply state that the problem lives in a lower-dimensional space, we will consider LPs with only inequality constraints, i.e. $E = \emptyset$. Recall that a linear inequality $a^T x \le b$ divides the entire space into two parts; the set $\{x : a^T x \le b\}$ is a halfspace. Thus, the feasible region of an (inequality-constrained) LP is an intersection of halfspaces. The set obtained by taking the intersection of halfspaces is called a polyhedron.

[Figure: a polyhedron $P$ obtained as the intersection of five halfspaces $a_i^T x \le b_i$, $i = 1, \dots, 5$.]
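The stacked form above maps directly onto off-the-shelf LP solvers. As a minimal sketch (the numbers are illustrative, and SciPy is assumed to be available), `scipy.optimize.linprog` minimizes and accepts only "<=" inequalities, so we negate the objective and flip the ">=" rows:

```python
# Sketch: solving the stacked LP  max c^T x  s.t.  A_g x >= b_g, A_e x = b_e,
# A_l x <= b_l, x >= 0  with scipy.optimize.linprog (illustrative data).
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])                              # objective vector (maximized)
A_g, b_g = np.array([[1.0, 0.0]]), np.array([0.2])    # A_g x >= b_g
A_e, b_e = np.array([[1.0, 1.0]]), np.array([1.0])    # A_e x  = b_e
A_l, b_l = np.array([[0.0, 1.0]]), np.array([0.9])    # A_l x <= b_l

# linprog only takes "<=" rows, so negate the ">=" block and stack.
A_ub = np.vstack([-A_g, A_l])
b_ub = np.concatenate([-b_g, b_l])

res = linprog(-c,                     # linprog minimizes, so negate c
              A_ub=A_ub, b_ub=b_ub,
              A_eq=A_e, b_eq=b_e,
              bounds=[(0, None), (0, None)],  # here P = {1, 2}: all x_j >= 0
              method="highs")
print(res.x, -res.fun)                # optimal point and maximal value
```

For this data the solver returns $x = (0.2, 0.8)$ with maximal objective value $1.8$; the sign flips are the only bookkeeping needed to move between the lecture's "maximize" convention and the solver's "minimize" one.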
Polyhedra are often good first approximations to more complicated feasible sets.

[Figure: a polyhedral set $L$ approximating a nonlinear feasible set $F$.]

In the figure above, $L$ is a linear approximation of the nonlinear feasible set $F$. Therefore, linear programs tend to provide good approximations to more complicated optimization problems. The second part of the answer to why one should bother with LPs is that LPs can be solved rather efficiently.

1.2 Geometrical solution of LPs

Continuing further with this geometrical approach, let us try to examine optimal solutions of LPs. Start with a one-dimensional LP:

\[
\text{maximize } cx \quad \text{subject to } a_1 \le x \le a_2.
\]

The optimal solution $x^*$ of this LP is given by

(a) $c > 0$: $x^* = a_2$;
(b) $c = 0$: any $x \in [a_1, a_2]$ is optimal, in particular $x^* = (a_1 + a_2)/2$;
(c) $c < 0$: $x^* = a_1$.

Moral: the optimal solution is always at the boundary of the feasible set.

Pushing this geometric approach a little further, consider a general (inequality-constrained) LP:

\[
\text{maximize } c^T x \quad \text{subject to } x \in \mathcal{P} \ (\text{a polytope}).
\]

For a given scalar $z$, the set of points $x$ with cost $c^T x = z$ is a plane perpendicular to $c$ (a hyperplane). Thus, an algorithm for solving the LP is to increase $z$, or equivalently to slide the hyperplane in the direction $c$, until the plane reaches the boundary of the feasible region.

Let us try our algorithm on the LP

\[
\text{maximize } c_1 x_1 + c_2 x_2 \quad \text{subject to } x_1 + x_2 \le 1, \; x \in \mathbb{R}^2_+.
\]

The feasible region of this LP is given by

\[
\mathcal{P} = \{ x : x_1 + x_2 \le 1, \; x_1 \ge 0, \; x_2 \ge 0 \}.
\]
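The hyperplane-sliding picture can be checked numerically on this example LP. A small sketch (assuming SciPy, with one illustrative choice of $c$) confirms that the optimum lands on a vertex of the triangle:

```python
# Sketch: the example LP  max c1*x1 + c2*x2  s.t.  x1 + x2 <= 1, x >= 0,
# solved for one illustrative objective direction c = (2, 1).
from scipy.optimize import linprog

c = (2.0, 1.0)
res = linprog([-c[0], -c[1]],          # linprog minimizes, so negate c
              A_ub=[[1.0, 1.0]], b_ub=[1.0],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x)       # the vertex (1, 0): sliding the hyperplane along c stops here
print(-res.fun)    # maximal objective value: 2.0
```

Since $c = (2, 1)$ points more steeply along $x_1$, the last feasible point the level hyperplane $c^T x = z$ touches is the corner $(1, 0)$, matching the boundary "moral" above.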
[Figure: the feasible region $\mathcal{P}$ in the $(x_1, x_2)$-plane, with four objective directions $c^1$, $c^2$, $c^3$, $c^4$.]

The results of our algorithm for various $c$ vectors are given by (a) $c = ($
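The specific directions $c^1, \dots, c^4$ from the figure are not recoverable from this excerpt, but the case analysis can be probed numerically with illustrative directions (assuming SciPy): when $c$ points into a corner the optimum is that vertex, when $c$ is parallel to $(1, 1)$ every point of the edge $x_1 + x_2 = 1$ is optimal, and when $c$ points away from the triangle the origin is optimal.

```python
# Sketch: sweeping illustrative objective directions over the LP
# max c^T x  s.t.  x1 + x2 <= 1, x >= 0, and recording the optimal values.
from scipy.optimize import linprog

vals = []
for c in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, -1.0)]:
    res = linprog([-c[0], -c[1]],      # negate: linprog minimizes
                  A_ub=[[1.0, 1.0]], b_ub=[1.0],
                  bounds=[(0, None), (0, None)],
                  method="highs")
    vals.append(-res.fun)              # maximal value for this direction
print(vals)
```

In every case the reported maximizer lies on the boundary of $\mathcal{P}$: the first two directions pick the vertices $(1,0)$ and $(0,1)$, the third is maximized anywhere on the edge (value $1$), and the fourth is maximized at the origin (value $0$).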