[…]

This strategy is known as the simplex method with two phases. During the first phase, we set up and solve the auxiliary problem. If its optimal value is zero, we proceed to the second phase, which consists in solving the original problem. Otherwise, the original problem is infeasible.

9.3 Duality of linear programming

Any maximization linear programme has a corresponding minimization problem called the dual problem. Any feasible solution of the dual problem gives an upper bound on the optimal value of the initial problem, which is called the primal. Reciprocally, any feasible solution of the primal provides a lower bound on the optimal value of the dual problem. In fact, if one of the two problems admits an optimal solution, then so does the other, and the two optimal values are equal. This section is devoted to this result, known as the Duality Theorem. Another interesting aspect of the dual problem is that, in some problems, its variables have a useful interpretation.

9.3.1 Motivations: providing upper bounds on the optimal value

A quick way to estimate the optimal value of a maximization linear programme is to compute a feasible solution whose value is sufficiently large. For instance, consider Problem 9.4 below. The solution (0, 0, 1, 0) gives a lower bound of 5 on the optimal value z∗. Even better, we get z∗ ≥ 22 by considering the solution (3, 0, 2, 0). Of course, proceeding in this way, we have no means of knowing how close the computed lower bound is to the optimal value.

Problem 9.4.

    Maximize     4x1 +  x2 + 5x3 + 3x4
    Subject to:   x1 −  x2 −  x3 + 3x4 ≤  1
                 5x1 +  x2 + 3x3 + 8x4 ≤ 55
                 −x1 + 2x2 + 3x3 − 5x4 ≤  3
                  x1, x2, x3, x4 ≥ 0

The previous approach provides lower bounds on the optimal value. However, this intuitive method is clearly less efficient than the simplex method, and it gives no clue about whether the obtained solution is optimal. To address this, it is useful to have upper bounds on the optimal value. This is the main topic of this section.

How can we obtain an upper bound on the optimal value in the previous example? A possible approach is to combine the constraints. For instance, multiplying the second constraint by 5/3, we get that z∗ ≤ 275/3. Indeed, for any x1, x2, x3, x4 ≥ 0:

    4x1 + x2 + 5x3 + 3x4 ≤ (25/3)x1 + (5/3)x2 + 5x3 + (40/3)x4
                         = (5x1 + x2 + 3x3 + 8x4) × 5/3
                         ≤ 55 × 5/3 = 275/3

In particular, the above inequality is satisfied by any optimal solution. Therefore, z∗ ≤ 275/3.

Let us try to improve this bound. For instance, we can add the second constraint to the third one. This gives, for any x1, x2, x3, x4 ≥ 0:

    4x1 + x2 + 5x3 + 3x4 ≤ 4x1 + 3x2 + 6x3 + 3x4
                         = (5x1 + x2 + 3x3 + 8x4) + (−x1 + 2x2 + 3x3 − 5x4)
                         ≤ 55 + 3 = 58

Hence, z∗ ≤ 58. More formally, we…
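
To make the bounding arguments above concrete, here is a minimal Python sketch that checks the two feasible solutions and the two constraint combinations used on Problem 9.4. It only illustrates the reasoning of this subsection; the helper names (value, is_feasible) are choices made for this sketch and do not come from the text.

    from fractions import Fraction

    # Data of Problem 9.4: maximize c·x subject to A x <= b, x >= 0.
    c = [4, 1, 5, 3]
    A = [[ 1, -1, -1,  3],
         [ 5,  1,  3,  8],
         [-1,  2,  3, -5]]
    b = [1, 55, 3]

    def value(coeffs, x):
        # Evaluate the linear expression coeffs·x.
        return sum(ci * xi for ci, xi in zip(coeffs, x))

    def is_feasible(x):
        # Check x >= 0 and A x <= b.
        return all(xi >= 0 for xi in x) and all(value(row, x) <= bi for row, bi in zip(A, b))

    # Lower bounds on z* from feasible solutions.
    for x in [(0, 0, 1, 0), (3, 0, 2, 0)]:
        assert is_feasible(x)
        print("feasible solution, objective =", value(c, x))   # 5, then 22

    # Upper bounds on z* from non-negative combinations y of the constraints:
    # y = (0, 5/3, 0) reproduces the bound 275/3, y = (0, 1, 1) gives 58.
    for y in [(0, Fraction(5, 3), 0), (0, 1, 1)]:
        combined = [sum(y[i] * A[i][j] for i in range(3)) for j in range(4)]
        # The combination dominates the objective coefficient-wise,
        # so c·x <= y·b for every feasible x.
        assert all(cj <= dj for cj, dj in zip(c, combined))
        print("upper bound on z* :", value(b, y))               # 275/3, then 58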
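
As a further illustration of the Duality Theorem mentioned at the start of this section, the sketch below solves Problem 9.4 and its dual numerically. It assumes SciPy is available (scipy.optimize.linprog is not part of the text), and the dual used here is the standard dual of Problem 9.4, written out only for this example: minimize y1 + 55y2 + 3y3 subject to Aᵀy ≥ c and y ≥ 0.

    from scipy.optimize import linprog

    c = [4, 1, 5, 3]                       # objective of Problem 9.4
    A = [[ 1, -1, -1,  3],
         [ 5,  1,  3,  8],
         [-1,  2,  3, -5]]
    b = [1, 55, 3]

    # Primal: maximize c·x subject to A x <= b, x >= 0.
    # linprog minimizes, so the objective is negated.
    primal = linprog([-ci for ci in c], A_ub=A, b_ub=b, bounds=[(0, None)] * 4)

    # Dual: minimize b·y subject to A^T y >= c, y >= 0, written as -A^T y <= -c.
    A_T = [[A[i][j] for i in range(3)] for j in range(4)]
    dual = linprog(b,
                   A_ub=[[-a for a in row] for row in A_T],
                   b_ub=[-ci for ci in c],
                   bounds=[(0, None)] * 3)

    print("primal optimum:", -primal.fun)
    print("dual optimum:  ", dual.fun)

Since Problem 9.4 is feasible (for instance (3, 0, 2, 0)) and bounded above (z∗ ≤ 58), the Duality Theorem guarantees that the two printed optima coincide.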