Algorithms Lecture 26: Linear Programming Algorithms [Fa'10]

Simplicibus itaque verbis gaudet Mathematica Veritas, cum etiam per se simplex sit Veritatis oratio. [And thus Mathematical Truth prefers simple words, because the language of Truth is itself simple.]
— Tycho Brahe (quoting Seneca (quoting Euripides)), Epistolarum astronomicarum liber primus (1596)

When a jar is broken, the space that was inside
Merges into the space outside.
In the same way, my mind has merged in God;
To me, there appears no duality.
— Sankara, VivekaChudamani (c. 700), translator unknown

26 Linear Programming Algorithms

In this lecture, we'll see a few algorithms for actually solving linear programming problems. The most famous of these, the simplex method, was proposed by George Dantzig in 1947. Although most variants of the simplex algorithm perform well in practice, no deterministic simplex variant is known to run in subexponential time in the worst case. However, if the dimension of the problem is considered a constant, there are several linear programming algorithms that run in linear time. I'll describe a particularly simple randomized algorithm due to Raimund Seidel.

My approach to describing these algorithms will rely much more heavily on geometric intuition than the usual linear-algebraic formalism. This works better for me, but your mileage may vary. For a more traditional description of the simplex algorithm, see Robert Vanderbei's excellent textbook Linear Programming: Foundations and Extensions [Springer, 2001], which can be freely downloaded (but not legally printed) from the author's website.

26.1 Bases, Feasibility, and Local Optimality

Consider the canonical linear program max { c · x | Ax ≤ b, x ≥ 0 }, where A is an n × d constraint matrix, b is an n-dimensional coefficient vector, and c is a d-dimensional objective vector.
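To make the canonical form concrete, here is a minimal sketch in Python/NumPy that encodes a tiny made-up instance of max { c · x | Ax ≤ b, x ≥ 0 } and checks feasibility of candidate points; the specific matrix A, vector b, and objective c are illustrative assumptions, not an example from these notes.

```python
import numpy as np

# A made-up canonical LP  max { c.x : Ax <= b, x >= 0 }  with
# n = 2 constraints in d = 2 dimensions (numbers chosen for illustration).
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])   # n x d constraint matrix
b = np.array([4.0, 6.0])     # n-dimensional coefficient vector
c = np.array([3.0, 2.0])     # d-dimensional objective vector

def is_feasible(x):
    """A point x is feasible iff it satisfies Ax <= b and x >= 0."""
    return bool(np.all(A @ x <= b) and np.all(x >= 0))

def objective(x):
    """The objective value c . x that the LP maximizes."""
    return float(c @ x)

print(is_feasible(np.array([1.0, 1.0])), objective(np.array([1.0, 1.0])))  # True 5.0
print(is_feasible(np.array([5.0, 0.0])))  # False: violates x1 + x2 <= 4
```

Note that the d nonnegativity constraints x ≥ 0 are constraints in their own right, which is why the feasible region below is described as an intersection of n + d halfspaces.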
We will interpret this linear program geometrically as looking for the highest point (in the direction c) of a convex polyhedron in R^d, described as the intersection of n + d halfspaces. As in the last lecture, we will consider only nondegenerate linear programs: every subset of d constraint hyperplanes intersects in a single point; at most d constraint hyperplanes pass through any point; and the objective vector c is linearly independent of any d − 1 constraint vectors.

A basis is a subset of d constraints, which by our nondegeneracy assumption must be linearly independent. The location of a basis is the unique point x that satisfies all d constraints with equality; geometrically, x is the unique intersection point of the corresponding d hyperplanes. The value of a basis is c · x, where x is the location of the basis. There are precisely (n + d choose d) bases.

Geometrically, the set of constraint hyperplanes defines a decomposition of R^d into convex polyhedra; this cell decomposition is called the arrangement of the hyperplanes. Every subset of d hyperplanes (that is, every basis) defines a vertex of this arrangement (the location of the basis). I will use the words 'vertex' and 'basis' interchangeably.
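The definitions above can be checked by brute force in small dimension: stack all n + d constraints, enumerate every d-subset (every candidate basis), solve the d tight equations for its location, and keep the best feasible vertex. This is exponentially slower than any real LP algorithm, but it makes the basis/location/value vocabulary concrete. The instance data is a made-up illustration, and the tolerance 1e-9 is an arbitrary choice.

```python
import itertools
import numpy as np

# Toy 2-D instance of the canonical LP  max { c.x : Ax <= b, x >= 0 }.
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])
n, d = A.shape

# Stack all n + d constraints as  M x <= h:
# the n rows of A, plus -x_i <= 0 encoding x >= 0.
M = np.vstack([A, -np.eye(d)])
h = np.concatenate([b, np.zeros(d)])

best_value, best_vertex = -np.inf, None
num_bases = 0
for basis in itertools.combinations(range(n + d), d):
    sub_M, sub_h = M[list(basis)], h[list(basis)]
    if abs(np.linalg.det(sub_M)) < 1e-9:
        continue                       # not linearly independent; skip
    x = np.linalg.solve(sub_M, sub_h)  # location: the d chosen constraints hold with equality
    num_bases += 1
    if np.all(M @ x <= h + 1e-9):      # is this vertex feasible for the whole LP?
        value = c @ x                  # value of the basis
        if value > best_value:
            best_value, best_vertex = value, x

print(num_bases)                # 6 = C(n+d, d) = C(4, 2) bases
print(best_vertex, best_value)  # optimal vertex (2, 2) with value 10
```

On this instance all (4 choose 2) = 6 bases are linearly independent, but only four of their locations are feasible vertices of the polyhedron; the optimum lies where the two rows of A are tight.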