# Elementary Linear Algebra 6e - Larson, Edwards, Falvo - Chapter 9.4


## 9.4 The Simplex Method: Minimization

In Section 9.3, the simplex method was applied only to linear programming problems in standard form in which the objective function was to be *maximized*. In this section, the procedure is extended to linear programming problems in which the objective function is to be *minimized*.

A minimization problem is in **standard form** if the objective function

$$w = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n$$

is to be minimized, subject to the constraints

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &\ge b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &\ge b_2 \\
&\ \,\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &\ge b_m
\end{aligned}
$$

where $x_i \ge 0$ and $b_i \ge 0$. The basic procedure used to solve such a problem is to convert it to a maximization problem in standard form, and then apply the simplex method as discussed in Section 9.3.

In Example 5 in Section 9.2, geometric methods were used to solve the following minimization problem.

**Minimization Problem:** Find the minimum value of the objective function

$$w = 0.12x_1 + 0.15x_2$$

subject to the constraints

$$
\begin{aligned}
60x_1 + 60x_2 &\ge 300 \\
12x_1 + \phantom{0}6x_2 &\ge 36 \\
10x_1 + 30x_2 &\ge 90
\end{aligned}
$$

where $x_1 \ge 0$ and $x_2 \ge 0$.

The first step in converting this problem to a maximization problem is to form the augmented matrix for this system of inequalities. To this augmented matrix, add a last row that represents the coefficients of the objective function, as follows.

$$
\begin{bmatrix}
60 & 60 & \vdots & 300 \\
12 & 6 & \vdots & 36 \\
10 & 30 & \vdots & 90 \\
\cdots & \cdots & \cdots & \cdots \\
0.12 & 0.15 & \vdots & 0
\end{bmatrix}
$$

Next, form the transpose of this matrix by interchanging its rows and columns. Note that the rows of this matrix are the columns of the first matrix, and vice versa.

$$
\begin{bmatrix}
60 & 12 & 10 & \vdots & 0.12 \\
60 & 6 & 30 & \vdots & 0.15 \\
\cdots & \cdots & \cdots & \cdots & \cdots \\
300 & 36 & 90 & \vdots & 0
\end{bmatrix}
$$

Finally, interpret the new matrix as a maximization problem. (To do this, introduce new variables $y_1$, $y_2$, and $y_3$.) The corresponding maximization problem is called the **dual** of the original minimization problem.
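The transpose step above is purely mechanical, so it is easy to check by machine. The following sketch (not from the text; variable names are my own) forms the augmented matrix for the example, transposes it, and reads off the pieces of the dual problem:

```python
# Rows: constraint coefficients with right-hand sides; the last row holds the
# objective-function coefficients (with 0 as its right-hand entry).
augmented = [
    [60, 60, 300],    # 60x1 + 60x2 >= 300
    [12, 6, 36],      # 12x1 +  6x2 >= 36
    [10, 30, 90],     # 10x1 + 30x2 >= 90
    [0.12, 0.15, 0],  # w = 0.12x1 + 0.15x2
]

# Transpose: rows become columns.
transposed = [list(col) for col in zip(*augmented)]

# Read off the dual: the last row of the transpose gives the dual objective
# coefficients, and the earlier rows give the "<=" constraints and bounds.
dual_objective = transposed[-1][:-1]                      # [300, 36, 90]
dual_constraints = [row[:-1] for row in transposed[:-1]]  # coefficient rows
dual_bounds = [row[-1] for row in transposed[:-1]]        # [0.12, 0.15]

print(dual_objective)    # [300, 36, 90]
print(dual_constraints)  # [[60, 12, 10], [60, 6, 30]]
print(dual_bounds)       # [0.12, 0.15]
```

The printed values match the dual maximization problem derived next.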

**Dual Maximization Problem:** Find the maximum value of the dual objective function

$$z = 300y_1 + 36y_2 + 90y_3$$

subject to the constraints

$$
\begin{aligned}
60y_1 + 12y_2 + 10y_3 &\le 0.12 \\
60y_1 + \phantom{0}6y_2 + 30y_3 &\le 0.15
\end{aligned}
$$

where $y_1 \ge 0$, $y_2 \ge 0$, and $y_3 \ge 0$. As it turns out, the solution of the original minimization problem can be found by applying the simplex method to this dual problem, as follows.

Initial tableau ($y_1$ entering, $s_1$ departing):

| Basic Variables | $y_1$ | $y_2$ | $y_3$ | $s_1$ | $s_2$ | $b$ |
|---|---|---|---|---|---|---|
| $s_1$ (departing) | 60 | 12 | 10 | 1 | 0 | 0.12 |
| $s_2$ | 60 | 6 | 30 | 0 | 1 | 0.15 |
| | $-300$ | $-36$ | $-90$ | 0 | 0 | 0 |

Second tableau ($y_3$ entering, $s_2$ departing):

| Basic Variables | $y_1$ | $y_2$ | $y_3$ | $s_1$ | $s_2$ | $b$ |
|---|---|---|---|---|---|---|
| $y_1$ | 1 | 1/5 | 1/6 | 1/60 | 0 | 1/500 |
| $s_2$ (departing) | 0 | $-6$ | 20 | $-1$ | 1 | 3/100 |
| | 0 | 24 | $-40$ | 5 | 0 | 3/5 |

Final tableau:

| Basic Variables | $y_1$ | $y_2$ | $y_3$ | $s_1$ | $s_2$ | $b$ |
|---|---|---|---|---|---|---|
| $y_1$ | 1 | 1/4 | 0 | 1/40 | $-1/120$ | 7/4000 |
| $y_3$ | 0 | $-3/10$ | 1 | $-1/20$ | 1/20 | 3/2000 |
| | 0 | 12 | 0 | **3** | **2** | 33/50 |

In the final tableau, the $s_1$ and $s_2$ columns correspond to $x_1$ and $x_2$. So, the solution of the dual maximization problem is $z = 33/50 = 0.66$. This is the same value that was obtained in the minimization problem given in Example 5, Section 9.2. The $x$-values corresponding to this optimal solution are obtained from the entries in the bottom row of the slack-variable columns. In other words, the optimal solution occurs when $x_1 = 3$ and $x_2 = 2$, giving the minimum $w = 0.12(3) + 0.15(2) = 0.66$.

The fact that a dual maximization problem has the same solution as its original minimization problem is stated formally in a result called the **von Neumann Duality Principle**, after the Hungarian-American mathematician John von Neumann (1903–1957).
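The tableau arithmetic above can be verified with a minimal simplex loop. This is a sketch of the standard pivoting procedure (not the textbook's own code), run on the dual problem with exact `Fraction` arithmetic so the final entries can be compared against the tableaux:

```python
from fractions import Fraction

F = Fraction

# Initial tableau rows for the dual problem: [y1, y2, y3, s1, s2, b].
rows = [
    [F(60), F(12), F(10), F(1), F(0), F(12, 100)],  # s1 row
    [F(60), F(6),  F(30), F(0), F(1), F(15, 100)],  # s2 row
]
# Bottom row: negated objective coefficients, slack zeros, current z.
bottom = [F(-300), F(-36), F(-90), F(0), F(0), F(0)]

while min(bottom[:-1]) < 0:
    # Entering column: most negative bottom-row entry.
    col = bottom.index(min(bottom[:-1]))
    # Departing row: smallest ratio b / (positive pivot-column entry).
    ratios = [(r[-1] / r[col], i) for i, r in enumerate(rows) if r[col] > 0]
    _, piv = min(ratios)
    # Normalize the pivot row, then clear the pivot column elsewhere.
    p = rows[piv][col]
    rows[piv] = [v / p for v in rows[piv]]
    for i in range(len(rows)):
        if i != piv and rows[i][col] != 0:
            f = rows[i][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[piv])]
    f = bottom[col]
    bottom = [a - f * b for a, b in zip(bottom, rows[piv])]

# Optimal z, and the x-values read from the slack columns of the bottom row.
print(bottom[-1])              # 33/50
print(bottom[3], bottom[4])    # 3 2
```

The loop pivots first on the $y_1$ column (row $s_1$), then on the $y_3$ column (row $s_2$), reproducing the second and final tableaux shown above, and terminates with $z = 33/50$, $x_1 = 3$, $x_2 = 2$.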

This note was uploaded on 02/24/2012 for the course MATH 310 taught by Professor Staff during the Spring '08 term at VCU.
