# Historical notes on LP

6 Linear Programming

A linear programming problem (LP) is to optimize (min or max) a linear function over a polyhedron, e.g., min{cx : Ax ≥ b}. The linear function cx is termed the objective function, and the polyhedron {x : Ax ≥ b} is the set of feasible solutions for this LP, i.e., its feasible region. While we have not yet dealt directly with such optimization models, they have been implicit in much of our development of polyhedral theory. For example, when a valid inequality cx ≥ δ determines a nonempty face F of the polyhedron P = {x : Ax ≥ b}, we have cx ≥ cz for all x ∈ P and all z ∈ F, so that any z ∈ F solves the LP min{cx : Ax ≥ b}. The same is true for any objective function cx with c ∈ C_F. Thus the nonempty faces of P are precisely the solution sets of LPs over P, and it is therefore reasonable to expect that polyhedral theory will provide important insight into linear programming models.

Historical Comments

The following appears in the paper by Liebling, Prodon, and Trotter in the UNESCO volume Encyclopedia of Life Support Systems (2002), 249–320. See also the following references: Gale, The Theory of Linear Economic Models (McGraw-Hill, 1960); Dantzig, Linear Programming and Extensions (Princeton University Press, 1963); Schrijver, Theory of Linear and Integer Programming (Wiley, 1986).

Intensive study of linear programming per se began only at the midpoint of the past century, even though the theoretical foundations for linear systems and polyhedra were laid over a century ago. Indeed, certain optimization models related to linear spaces have been well understood for two centuries. For example, recall that least squares approximation seeks an element of a subspace which lies at minimum distance from a given point; i.e., for A ∈ ℝ^{m×n}, b ∈ ℝ^m and subspace S = {Ax : x ∈ ℝ^n}, we consider the minimization problem min_x ‖Ax − b‖.
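As a numerical aside, the least-squares problem min_x ‖Ax − b‖ is solved by the normal equations AᵀAx = Aᵀb; a minimal sketch using NumPy's `lstsq` (the matrix A and vector b below are made-up illustrative data, not from the text):

```python
import numpy as np

# Illustrative overdetermined system: A in R^{3x2}, b in R^3
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])

# min_x ||Ax - b||_2, equivalent to solving the normal equations A^T A x = A^T b
x, residuals, rank, singular_values = np.linalg.lstsq(A, b, rcond=None)
print(x)  # the least-squares solution
```

Here `lstsq` handles rank-deficient A as well; for this full-rank example it returns the unique minimizer of the 2-norm residual.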
The solution (Exercise 1.43) dates to Legendre (1805) and Gauss (1809). An alternative solution method, iterative in nature, was proposed in 1823 by Gauss, who evidently found the procedure very elementary: so mindless, in fact, that he could do the computation while half-asleep or thinking about other things (“…lässt sich halb im Schlafe ausführen, oder man kann während desselben an andere Dinge denken.”). Later, Fourier (1826) considered the same problem, but with a different norm, min_x ‖Ax − b‖_∞; i.e., find x which minimizes the largest component magnitude of Ax − b. His formulation is apparently the first linear programming model: min{λ : −λ ≤ ∑_j a_{ij} x_j − b_i ≤ λ ∀ i}. It is significant that the method of solution he suggested is essentially the same as that most commonly used today. He proposed vertex-to-vertex descent along edges of the polyhedron of feasible solutions (“…on continue de descendre suivant une seconde arête jusqu'à un nouveau sommet…”) until attaining the minimum (“…au point le plus bas du polyèdre…”).
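Fourier's formulation translates directly into a modern LP solver call: introduce λ as an extra variable and rewrite −λ ≤ (Ax − b)_i ≤ λ as two systems of ≤-constraints. A sketch using SciPy's `linprog` (the data A, b are made-up for illustration; `linprog` expects constraints as A_ub z ≤ b_ub over the stacked variable z = (x, λ)):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (not from the text)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])
m, n = A.shape

# Variables z = (x, lam); objective: minimize lam
c = np.zeros(n + 1)
c[-1] = 1.0

# -lam <= (Ax - b)_i <= lam for all i, split into two <= systems:
#    A x - lam <= b    and    -A x - lam <= -b
ones = np.ones((m, 1))
A_ub = np.vstack([np.hstack([A, -ones]),
                  np.hstack([-A, -ones])])
b_ub = np.concatenate([b, -b])

# x is free; lam >= 0 (it bounds absolute values)
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)])
x_opt, lam = res.x[:n], res.x[-1]
print(x_opt, lam)  # minimizer and the optimal Chebyshev residual
```

The solver's default simplex-style/interior HiGHS backend is, in spirit, a descendant of exactly the vertex-to-vertex descent Fourier described.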