LINEAR PROGRAMMING
A Concise Introduction
Thomas S. Ferguson
Contents

1. Introduction
   The Standard Maximum and Minimum Problems
   The Diet Problem
   The Transportation Problem
   The Activity Analysis Problem
   The Optimal Assignment Problem
   Terminology
2. Duality
   Dual Linear Programming Problems
   The Duality Theorem
   The Equilibrium Theorem
   Interpretation of the Dual
3. The Pivot Operation
4. The Simplex Method
   The Simplex Tableau
   The Pivot Madly Method
   Pivot Rules for the Simplex Method
   The Dual Simplex Method
5. Generalized Duality
   The General Maximum and Minimum Problems
   Solving General Problems by the Simplex Method
   Solving Matrix Games by the Simplex Method
6. Cycling
   A Modification of the Simplex Method That Avoids Cycling
7. Four Problems with Nonlinear Objective Function
   Constrained Games
   The General Production Planning Problem
   Minimizing the Sum of Absolute Values
   Minimizing the Maximum of Absolute Values
   Chebyshev Approximation
   Linear Fractional Programming
   Activity Analysis to Maximize the Rate of Return
8. The Transportation Problem
   Finding a Basic Feasible Shipping Schedule
   Checking for Optimality
   The Improvement Routine
Related Texts
1. Introduction.
A linear programming problem may be defined as the problem of maximizing or minimizing a linear function subject to linear constraints. The constraints may be equalities or inequalities. Here is a simple example.
Find numbers x1 and x2 that maximize the sum x1 + x2 subject to the constraints x1 ≥ 0, x2 ≥ 0, and

   x1 + 2x2 ≤ 4
   4x1 +  x2 ≤ 12
   −x1 +  x2 ≤ 1
In this problem there are two unknowns, and five constraints. All the constraints are inequalities, and they are all linear in the sense that each involves an inequality in some linear function of the variables. The first two constraints, x1 ≥ 0 and x2 ≥ 0, are special. These are called nonnegativity constraints and are often found in linear programming problems. The other constraints are then called the main constraints. The function to be maximized (or minimized) is called the objective function. Here, the objective function is x1 + x2.
Since there are only two variables, we can solve this problem by graphing the set of points in the plane that satisfies all the constraints (called the constraint set) and then finding which point of this set maximizes the value of the objective function. Each inequality constraint is satisfied by a half-plane of points, and the constraint set is the intersection of all the half-planes. In the present example, the constraint set is the five-sided figure shaded in Figure 1.
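The graphing step can also be mimicked in code: every vertex of the constraint set is the intersection of two constraint boundary lines, so enumerating all pairwise intersections, discarding the infeasible ones, and evaluating the objective at the rest reproduces the picture numerically. The following sketch is not part of the original text and assumes NumPy is available:

```python
import itertools
import numpy as np

# Constraints written uniformly as a @ x <= b, including the
# nonnegativity constraints -x1 <= 0 and -x2 <= 0.
A = np.array([[-1.0,  0.0],   # x1 >= 0
              [ 0.0, -1.0],   # x2 >= 0
              [ 1.0,  2.0],   # x1 + 2*x2 <= 4
              [ 4.0,  1.0],   # 4*x1 + x2 <= 12
              [-1.0,  1.0]])  # -x1 + x2 <= 1
b = np.array([0.0, 0.0, 4.0, 12.0, 1.0])

best_point, best_value = None, -np.inf
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                       # parallel boundaries: no vertex
    x = np.linalg.solve(M, b[[i, j]])  # intersection of boundaries i and j
    if np.all(A @ x <= b + 1e-9):      # keep only feasible intersections
        value = x[0] + x[1]            # objective function x1 + x2
        if value > best_value:
            best_point, best_value = x, value

print(best_point, best_value)
```

This brute-force enumeration is practical only for tiny problems, but it makes the geometric picture concrete: the maximum is found at one of the corner points of the shaded figure.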
We seek the point (x1, x2) that achieves the maximum of x1 + x2 as (x1, x2) ranges over this constraint set. The function x1 + x2 is constant on lines with slope −1, for example the line x1 + x2 = 1, and as we move this line further from the origin up and to the right, the value of x1 + x2 increases. Therefore, we seek the line of slope −1 that is farthest from the origin and still touches the constraint set. This occurs at the intersection of the lines x1 + 2x2 = 4 and 4x1 + x2 = 12, that is, at the point (x1, x2) = (20/7, 4/7), where the objective function takes the value 24/7.
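The graphical solution can be checked numerically with an off-the-shelf solver. As an illustration (not part of the original text), here is a sketch using SciPy's linprog; since linprog minimizes by convention, the objective x1 + x2 is negated:

```python
from scipy.optimize import linprog

# Maximize x1 + x2  <=>  minimize -(x1 + x2).
c = [-1.0, -1.0]

# Main constraints in the form A_ub @ x <= b_ub.
A_ub = [[ 1.0, 2.0],   # x1 + 2*x2 <= 4
        [ 4.0, 1.0],   # 4*x1 + x2 <= 12
        [-1.0, 1.0]]   # -x1 + x2 <= 1
b_ub = [4.0, 12.0, 1.0]

# The default variable bounds (0, None) are exactly the
# nonnegativity constraints x1 >= 0, x2 >= 0.
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, -res.fun)  # optimal point and maximum of x1 + x2
```

The solver's answer should agree with the corner point identified by sliding the line of slope −1 across the constraint set.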
This note was uploaded on 02/25/2010 for the course ESI 6314 taught by Professor Vladimirlboginski during the Fall '09 term at University of Florida.