Simplex Method
(algorithm explained step by step)
Xin Li, Department of Mathematics, University of Central Florida
In linear programming problems, it is intuitive that the maximum and minimum of the objective function are attained at vertices of the feasible set (the region determined by the linear equalities and inequalities). So an oversimplified solution is: "evaluate the linear objective function at all the vertices (a finite set of points) and pick the largest value as the maximum and the smallest as the minimum." This is correct in theory, but when the number of variables becomes large and the feasible set is determined by many equalities and inequalities, finding all the vertices is very time consuming. The main goal of the simplex method is to give an efficient way to "go through" the vertices. On average, we do not have to visit all the vertices to recognize that we have reached a maximum (or minimum) point, even though in the worst case (there are explicit examples) we must exhaust all vertices.
We will first work through an example step by step and then summarize the procedure as an algorithm that carries out the simplex method.
Consider the following example. Maximize

    25x₁ + 30x₂

subject to

    20x₁ + 30x₂ ≤ 690
     5x₁ +  4x₂ ≤ 120
        x₁, x₂ ≥ 0
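Before running the simplex method, it is instructive to see the brute-force "evaluate at every vertex" idea from the introduction applied to this small example. The sketch below (function and variable names are my own, not from the notes) intersects every pair of constraint boundaries, keeps the feasible intersection points, and picks the vertex with the largest objective value:

```python
from itertools import combinations

# Each constraint is written as a1*x1 + a2*x2 <= b; the
# nonnegativity conditions become -x1 <= 0 and -x2 <= 0.
constraints = [
    (20.0, 30.0, 690.0),
    (5.0, 4.0, 120.0),
    (-1.0, 0.0, 0.0),
    (0.0, -1.0, 0.0),
]

def objective(x1, x2):
    return 25.0 * x1 + 30.0 * x2

def vertices(cons):
    """Intersect every pair of constraint boundaries and keep
    the intersection points satisfying all constraints."""
    pts = []
    for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:          # parallel boundaries
            continue
        x1 = (b * c2 - a2 * d) / det  # Cramer's rule
        x2 = (a1 * d - b * c1) / det
        if all(p * x1 + q * x2 <= r + 1e-9 for p, q, r in cons):
            pts.append((x1, x2))
    return pts

best = max(vertices(constraints), key=lambda v: objective(*v))
print(best, objective(*best))  # (12.0, 15.0) 750.0
```

This confirms the optimum (x₁, x₂) = (12, 15) with value 750, but it examines every pair of boundaries; the number of such pairs grows combinatorially with the problem size, which is exactly the inefficiency the simplex method avoids.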
Step 0. Put the constraints into the 2nd primal form by introducing the slack variables:

    20x₁ + 30x₂ + y₁ = 690
     5x₁ +  4x₂ + y₂ = 120
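To preview where the step-by-step computation is headed, here is a minimal tableau implementation of the simplex method for maximization problems in this form (all constraints ≤ with nonnegative right-hand sides, all variables ≥ 0). The function and variable names are my own; this is a sketch of the standard tableau procedure, not necessarily the exact notation used later in these notes:

```python
def simplex(c, A, b):
    """Maximize c·x subject to A x <= b, x >= 0 (all b >= 0),
    using the tableau method with slack variables."""
    m, n = len(A), len(c)
    # Tableau rows are [A | I | b]; the objective row sits below.
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-cj for cj in c] + [0.0] * m + [0.0])
    basis = [n + i for i in range(m)]  # the slacks start in the basis
    while True:
        # Entering variable: most negative objective-row coefficient.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break  # no negative coefficient left: optimal
        # Leaving variable: minimum-ratio test over positive entries.
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("objective is unbounded")
        _, row = min(ratios)
        basis[row] = col
        # Pivot: scale the pivot row, then clear the pivot column.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

x, z = simplex([25, 30], [[20, 30], [5, 4]], [690, 120])
print(x, z)  # x ≈ [12, 15], z ≈ 750
```

Each pass of the loop moves from one vertex of the feasible set to an adjacent one that improves the objective, which is precisely the "efficient way to go through the vertices" described above: for this example it stops after two pivots rather than checking every vertex.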