Algorithms Non-Lecture J: Linear Programming Algorithms

Simplicibus itaque verbis gaudet Mathematica Veritas, cum etiam per se simplex sit Veritatis oratio. [And thus Mathematical Truth prefers simple words, because the language of Truth is itself simple.]

— Tycho Brahe (quoting Seneca (quoting Euripides)), Epistolarum astronomicarum liber primus (1596)

When a jar is broken, the space that was inside
Merges into the space outside.
In the same way, my mind has merged in God;
To me, there appears no duality.

— Sankara, Vivekachudamani (c. 700), translator unknown

J Linear Programming Algorithms

In this lecture, we'll see a few algorithms for actually solving linear programming problems. The most famous of these, the simplex method, was proposed by George Dantzig in 1947. Although most variants of the simplex algorithm perform well in practice, no simplex variant is known to run in subexponential time in the worst case. However, if the dimension of the problem is considered a constant, there are several linear programming algorithms that run in linear time. I'll describe a particularly simple randomized algorithm due to Raimund Seidel.

My approach to describing these algorithms will rely much more heavily on geometric intuition than the usual linear-algebraic formalism. This works better for me, but your mileage may vary. For a more traditional description of the simplex algorithm, see Robert Vanderbei's excellent textbook Linear Programming: Foundations and Extensions [Springer, 2001], which can be freely downloaded (but not legally printed) from the author's website.

J.1 Bases, Feasibility, and Local Optimality

Consider the canonical linear program max { c · x | Ax ≤ b, x ≥ 0 }, where A is an n × d constraint matrix, b is an n-dimensional coefficient vector, and c is a d-dimensional objective vector.
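To make the canonical form concrete, here is a minimal sketch that solves a tiny instance with scipy.optimize.linprog. The solver minimizes, so we negate c to maximize; its default variable bounds are (0, None), which is exactly the x ≥ 0 condition above. The two-constraint, two-variable instance is my own illustrative example, not from the notes.

```python
import numpy as np
from scipy.optimize import linprog

# Canonical LP data:  max { c.x | Ax <= b, x >= 0 }
c = np.array([3.0, 2.0])            # d-dimensional objective vector
A = np.array([[1.0, 1.0],           # n x d constraint matrix
              [2.0, 0.5]])
b = np.array([4.0, 5.0])            # n-dimensional coefficient vector

# linprog minimizes, so pass -c; default bounds (0, None) give x >= 0.
res = linprog(-c, A_ub=A, b_ub=b)

print("optimal x =", res.x)         # optimal location
print("optimal value =", -res.fun)  # undo the sign flip
```

For this instance the optimum is the vertex where both constraints are tight, with objective value 10.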
We will interpret this linear program geometrically as looking for the lowest point in a convex polyhedron in R^d, described as the intersection of n + d halfspaces. As in the last lecture, we will consider only nondegenerate linear programs: every subset of d constraint hyperplanes intersects in a single point; at most d constraint hyperplanes pass through any point; and the objective vector is linearly independent from any d − 1 constraint vectors.

A basis is a subset of d constraints, which by our nondegeneracy assumption must be linearly independent. The location of a basis is the unique point x that satisfies all d constraints with equality; geometrically, x is the unique intersection point of the d hyperplanes. The value of a basis is c · x, where x is the location of the basis. There are precisely (n+d choose d) bases.

Geometrically, the set of constraint hyperplanes defines a decomposition of R^d into convex polyhedra; this cell decomposition is called the arrangement of the hyperplanes. Every subset of d hyperplanes (that is, every basis) defines a vertex of this arrangement (the location of the basis). I will use the words vertex and basis interchangeably.
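The definitions above can be made concrete by brute force: stack the n row constraints Ax ≤ b with the d sign constraints x ≥ 0 (written as −x_i ≤ 0), take every subset of d constraints, solve a d × d linear system for its location, and keep the best feasible vertex. This is exponential, of course, but it makes the count of (n+d choose d) bases and the location/value terminology tangible. The toy instance below is my own, not from the notes.

```python
from itertools import combinations
import numpy as np

c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 0.5]])
b = np.array([4.0, 5.0])
n, d = A.shape

# All n + d constraints as rows of  G x <= h:
# the n rows of A, plus -x_i <= 0 for each sign constraint.
G = np.vstack([A, -np.eye(d)])
h = np.concatenate([b, np.zeros(d)])

best_value, best_x = -np.inf, None
for basis in combinations(range(n + d), d):   # C(n+d, d) bases in all
    Gs, hs = G[list(basis)], h[list(basis)]
    if abs(np.linalg.det(Gs)) < 1e-12:
        continue                              # no unique intersection point
    x = np.linalg.solve(Gs, hs)               # location of this basis
    if np.all(G @ x <= h + 1e-9):             # is the vertex feasible?
        value = c @ x                         # value of this basis
        if value > best_value:
            best_value, best_x = value, x

print("best vertex =", best_x, "value =", best_value)
```

Under the nondegeneracy assumption every d-subset yields a unique location; the determinant check is just a guard for degenerate inputs. Every algorithm in this note can be seen as a smarter way to search this same finite set of vertices.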
This note was uploaded on 12/15/2009 for the course 942 cs taught by Professor A during the Spring '09 term at University of Illinois at Urbana–Champaign.