CSC2411 - Linear Programming and Combinatorial Optimization
Lecture 6: LP Duality, Complementary Slackness, Farkas Lemma, and the von Neumann Minmax Principle
Notes taken by Pixing ZHANG, February 17, 2005

Summary: In this lecture, we further discuss LP duality. We prove the duality theorems, discuss complementary slackness, and prove the Farkas Lemma; these results are closely related to each other. Finally, we discuss an application: the von Neumann minmax theorem.

1 Primal and Dual
Recalling question 1 from the assignment: as a first step, we formalize it as a linear program, which we take as our primal; note that some of its variables are unconstrained. We use a figure in the following to illustrate the problem.

Lecture notes for a course given by Avner Magen, Dept. of Computer Science, University of Toronto.
Notice that the relevant variables can be read as the coefficients of convex combinations. This motivates us to define a pair of points as convex combinations of the input points, so that the problem is transformed into an LP over those coefficients. From our previous lectures and tutorials, we know how to derive its dual.

Figure 1: The geometric representation of question 1 in Assignment 1 (the labels mark the height of the polytope and the parts of the figure corresponding to the variables t, b, and a).

The meaning of the dual formulation is that we range over all possible pairs of points in the convex hull generated by the input points, and we want to find the convex hull's maximal height at some point. It is easy to see that the primal objective cannot be smaller than half of the height found in the dual problem, so this half-height is a bound for the original primal problem. It would be interesting to see a simple proof of this observation; in the following section, we discuss such proofs.

2 Weak and Strong Duality Theorem
From what we know so far, the following figure gives a rough idea of the possible values of the primal's and the dual's solutions.

Figure 2: The relationship between primal and dual solutions, assuming both are feasible. Primal solution values lie on one side of the primal optimum and dual solution values on the other side of the dual optimum; the question is whether there is a gap between the two optima.

This figure is captured by the following theorems. Before we prove them, we point out that we will prove most of our theorems for linear programs in standard form. This does not prevent us from applying them to other forms, since the different forms can be defined in terms of each other, and the proofs extend as well.

Theorem 2.1 (Weak Duality Theorem). If x is feasible for the primal min{c^T x : Ax = b, x >= 0} and y is feasible for the dual max{y^T b : y^T A <= c^T}, then c^T x >= y^T b.

Proof. By the construction of the primal and dual, we immediately get

    c^T x >= (y^T A) x = y^T (Ax) = y^T b,

where the inequality uses y^T A <= c^T and x >= 0.

From this theorem we do not know whether there is a gap between the optimal primal solution and the optimal dual solution when both problems are feasible, nor whether feasibility can be related to the optimum. The following strong duality theorem tells us that no such gap exists.

Theorem 2.2 (Strong Duality Theorem). If an LP has an optimal solution, then so does its dual, and furthermore their optimal values are equal.

An interesting aspect of the following proof is that it is based on the simplex algorithm; in particular, we will use the properties the simplex algorithm guarantees at termination.

Proof. Let the primal and dual be

    min{c^T x : Ax = b, x >= 0}    and    max{y^T b : y^T A <= c^T}.

Assume the primal has an optimal solution. Then at the termination point of the simplex algorithm, with a basis B and the remaining (nonbasic) columns N, the reduced costs are nonnegative:

    c_N^T - c_B^T A_B^{-1} A_N >= 0,    and trivially    c_B^T - c_B^T A_B^{-1} A_B = 0.

Indeed, let y^T = c_B^T A_B^{-1}. Then y^T A <= c^T, so if the primal has an optimal solution, we can find a feasible solution for the dual. Since the optimal basic solution satisfies x_B = A_B^{-1} b and x_N = 0, we get the following equation:

    y^T b = c_B^T A_B^{-1} b = c_B^T x_B = c^T x.

Therefore we can conclude that if the primal has an optimal solution, the dual is feasible and has a solution whose value equals the primal optimum.

2.1 The Primal and Dual's Possible Categories
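Before cataloguing the possible combinations, the two theorems above can be illustrated numerically. The following sketch checks weak duality on a tiny standard-form pair; the LP data is entirely made up for illustration, and the sampled feasible points include the optima of both problems, which meet as strong duality promises.

```python
# Numeric check of weak duality on a tiny hypothetical standard-form LP.
# Primal:  min c.x   s.t.  A x  = b, x >= 0
# Dual:    max y.b   s.t.  y^T A <= c^T (componentwise)

A = [[1.0, 1.0]]          # one equality constraint, two variables
b = [4.0]
c = [2.0, 3.0]

def primal_value(x):
    """Objective value of a point x, after verifying primal feasibility."""
    assert all(xi >= 0 for xi in x)
    for row, bi in zip(A, b):                      # check A x = b
        assert abs(sum(aij * xj for aij, xj in zip(row, x)) - bi) < 1e-9
    return sum(ci * xi for ci, xi in zip(c, x))

def dual_value(y):
    """Objective value of a point y, after verifying dual feasibility."""
    for j in range(len(c)):                        # check y^T A <= c^T
        yA_j = sum(y[i] * A[i][j] for i in range(len(A)))
        assert yA_j <= c[j] + 1e-9
    return sum(yi * bi for yi, bi in zip(y, b))

# A few feasible points of each problem.
primal_points = [[4.0, 0.0], [0.0, 4.0], [2.0, 2.0]]
dual_points = [[0.0], [1.0], [2.0]]

# Weak duality: every primal value dominates every dual value.
for x in primal_points:
    for y in dual_points:
        assert primal_value(x) >= dual_value(y) - 1e-9

print(primal_value([4.0, 0.0]), dual_value([2.0]))  # 8.0 8.0 -- the optima meet
```

Here x = (4, 0) and y = (2) happen to be the respective optima, so the matching values 8.0 are exactly the no-gap conclusion of Theorem 2.2.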
Considering the pair of primal and dual problems, our discussion so far allows three different possible statuses for each: infeasible, feasible but unbounded, and feasible with an optimum. Table 1 shows which combinations are possible.

                                      Dual
                    Has optimum      Unbounded        Infeasible
    Primal
    Has optimum     possible (*)     impossible (0)   impossible (*)
    Unbounded       impossible (0)   impossible (0)   possible
    Infeasible      impossible (*)   possible         possible

Table 1: "*" marks entries that follow from strong duality, "0" marks entries that follow from the weak duality corollary. When both problems have optima, the optima coincide.

As for the (Infeasible, Infeasible) entry, it is quite easy to see that it is possible: suppose the primal is infeasible; then write its dual and add constraints to make it infeasible as well. The primal obtained from this new infeasible dual is still infeasible. We also observe that the dual of the dual problem is the primal, and therefore the table must be symmetric.

3 Complementary Slackness
For the following discussion, we use linear programs in canonical form. We consider the primal

    min{c^T x : Ax >= b, x >= 0}

and its dual

    max{y^T b : y^T A <= c^T, y >= 0}.

Previously we emphasized the special role of the inequalities that hold as equalities at a certain solution, particularly at an optimal solution. In the context of primal and dual problems, the following theorem gives an ultimate expression of this balance for a pair (x, y) to be respective optima of the primal and dual.

Theorem 3.1 (Complementary Slackness). Let x, y be feasible solutions for the primal and dual problems respectively. Then they are optimal solutions iff

    y^T (Ax - b) = 0    and    (c^T - y^T A) x = 0.

Proof. Since x and y are feasible,

    c^T x >= (y^T A) x >= y^T b.

By the strong duality theorem, x and y are optima iff c^T x = y^T b, which holds iff all three of the above terms are equal.

What can we get from these equations? Consider the first equation, y^T (Ax - b) = 0, which can be rewritten as sum_i y_i ((Ax)_i - b_i) = 0. Since y >= 0 and Ax - b >= 0, every term in the sum is nonnegative, so the sum vanishes iff for all i, either y_i = 0 (whenever the i-th primal constraint is slack) or (Ax)_i = b_i. Similarly, the second equation can be rewritten as sum_j (c_j - (y^T A)_j) x_j = 0, from which we know that it holds iff for all j, either x_j = 0 (whenever the j-th dual constraint is slack) or (y^T A)_j = c_j.

4 Linear System's Feasibility and Farkas Lemma
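Before turning to feasibility questions, note that the slackness conditions of Theorem 3.1 can be checked mechanically: given a feasible pair (x, y), optimality reduces to testing two products against zero. The canonical-form pair below is hypothetical data chosen only for illustration.

```python
# Checking the complementary slackness conditions of Theorem 3.1
# on a tiny hypothetical canonical-form pair.
# Primal:  min c.x  s.t.  A x >= b, x >= 0
# Dual:    max y.b  s.t.  y^T A <= c^T, y >= 0

A = [[1.0, 1.0]]
b = [4.0]
c = [2.0, 3.0]

def slack_conditions_hold(x, y, tol=1e-9):
    # y_i * ((A x)_i - b_i) == 0 for every constraint i
    for i, (row, bi) in enumerate(zip(A, b)):
        surplus = sum(aij * xj for aij, xj in zip(row, x)) - bi
        if abs(y[i] * surplus) > tol:
            return False
    # (c_j - (y^T A)_j) * x_j == 0 for every variable j
    for j in range(len(c)):
        reduced = c[j] - sum(y[i] * A[i][j] for i in range(len(A)))
        if abs(reduced * x[j]) > tol:
            return False
    return True

print(slack_conditions_hold([4.0, 0.0], [2.0]))  # True:  this pair is optimal
print(slack_conditions_hold([0.0, 4.0], [2.0]))  # False: (0, 4) is feasible, not optimal
```

The second call fails because x_2 = 4 > 0 while the second dual constraint is slack (c_2 - (y^T A)_2 = 1), exactly the situation Theorem 3.1 rules out at optimality.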
Now let us discuss the question of whether a set of inequalities and equations is feasible. Intuitively, if we can multiply the equations by scalars and add them up so as to obtain an invalid equation, then we know the linear system is infeasible. Consider an example of two equations: multiplying the first equation by 1 and the second equation by 2 and adding them up may yield an equation whose left-hand side is identically 0 while its right-hand side is a nonzero constant. Since 0 cannot equal a nonzero constant, the linear system is not feasible.

This demonstrates a simple way to prove the infeasibility of a linear system. To formalize the method: if y^T A = 0 and y^T b != 0, then Ax = b is infeasible. Moreover, this can be strengthened to a sufficient and necessary condition; that is, the reverse direction holds too: if a linear system is infeasible, then there must be such a "linear proof" of that fact. The Farkas Lemma captures this idea formally.

4.1 Farkas Lemma
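As a warm-up to the lemma, the "linear proof" of infeasibility described above can be carried out in code. The system below is invented for illustration (the original example's coefficients did not survive), but it follows the same pattern: multiply the first equation by 1 and the second by 2, then add.

```python
# A "linear proof" of infeasibility: combine the equations of A x = b
# with multipliers y.  If y^T A = 0 but y^T b != 0, the combined equation
# reads "0 = nonzero", so the system has no solution at all.
# The coefficients below are made up for illustration.

A = [[2.0, -2.0],    # eq 1:  2*x1 - 2*x2 = 1
     [-1.0, 1.0]]    # eq 2:   -x1 +  x2 = 1
b = [1.0, 1.0]
y = [1.0, 2.0]       # multiply eq 1 by 1 and eq 2 by 2, then add

yA = [sum(y[i] * A[i][j] for i in range(len(A))) for j in range(len(A[0]))]
yb = sum(yi * bi for yi, bi in zip(y, b))

print(yA, yb)   # [0.0, 0.0] 3.0 -- the combined equation reads "0 = 3"
assert all(v == 0.0 for v in yA) and yb != 0.0   # certificate of infeasibility
```

The vector y is a certificate that can be handed to anyone: verifying it takes one matrix-vector product, with no need to search the (empty) solution set.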
The Farkas Lemma is attributed to J. Farkas, 1894. It is useful to think of it as the geometric version of the duality theorem.

Theorem 4.1 (Farkas Lemma). The system Ax = b, x >= 0 is feasible iff there is no y such that y^T A >= 0 and y^T b < 0.

Proof. One direction is trivial: let x be a solution to Ax = b, x >= 0, and suppose y satisfies y^T A >= 0. Then, since x >= 0, we have y^T b = y^T (Ax) = (y^T A) x >= 0, so no y can have both y^T A >= 0 and y^T b < 0.

Now let us consider the other direction, which is more interesting and complicated. In fact, this is not surprising, as this direction is analogous to the strong duality theorem.

For a set of vectors v_1, ..., v_n, we define cone(v_1, ..., v_n) = { sum_i lambda_i v_i : lambda_i >= 0 }. Figure 3 illustrates the definition.
Figure 3: Illustration of the cone's definition, with vectors v1, v2 and the origin o.

If we take the columns A_1, ..., A_n of A as the vectors, then the infeasibility of Ax = b, x >= 0 is the same as saying b is not in cone(A_1, ..., A_n). Given a y with y^T A >= 0 and y^T b < 0, we can view {z : y^T z = 0} as a hyperplane through the origin with all of the A_i on one side and b on the other side.

So let C = cone(A_1, ..., A_n), assume b is not in C, and let p be the closest point to b in C, which must exist since the distance from b is a continuous function and C is a closed set. Set y = p - b.

We first argue that for every point z in C, y^T z >= 0. If this claim did not hold, we could find another point of C, on the segment between p and z, with a smaller distance to b than p has, contradicting the choice of p. Figure 4 illustrates the relationship of z, p, and b.

Figure 4: Illustration of the claim and the relationship of z, p, b.

In particular, taking x = e_i (the unit vector whose i-th entry equals 1) gives Ax = A_i, which lies in C, and hence y^T A_i >= 0 for every column i. As a result, the hyperplane defined by y puts all of the A_i on one side.

Now we need to show that b is put on the other side. Since p is the closest point to b in the cone C, we also have y^T p = 0 (otherwise, scaling p slightly along its own direction would decrease the distance to b). Adding these facts together, we have

    y^T b = y^T (p - y) = y^T p - y^T y = -||y||^2 < 0,

where ||y|| > 0 because b not in C implies p != b. This means b is put on the other side of the hyperplane than the A_i. Therefore, if Ax = b, x >= 0 is not feasible, we can find a y such that y^T A >= 0 and y^T b < 0. This finishes the proof.

5 Application: von Neumann minmax principle
A zero-sum game is a game with 2 players, in which each player has a finite set of strategies. The payoff to the first player is determined by the strategies chosen by both players, and the payoff to the second player is the negation of the payoff to the first, so the sum of their payoffs is zero. The following Paper-Scissors-Stone game is a zero-sum game.

                          Column player
    Row player      Paper     Stone     Scissors
    Paper             0        -1          1
    Stone             1         0         -1
    Scissors         -1         1          0

Table 2: The Paper-Scissors-Stone game's payoff matrix A; the entry A(i, j) is the payoff to the column player when the row player plays i and the column player plays j.

If the column player plays strategy j and the row player plays strategy i, the payoff to the column player is A(i, j). If the column player plays first (announcing her strategy), she will get profit

    max_j min_i A(i, j).

If we reverse the order, she will get profit

    min_i max_j A(i, j).

For the above two quantities we have max_j min_i A(i, j) = -1 and min_i max_j A(i, j) = 1, so it is easy to see that there is a big advantage to playing second in this game.

Now what if each player exposes a probability vector that determines her strategy (i.e., mixed strategies)? Since each player's strategy is then determined by a probability distribution, the order of play becomes less important. Let us define A(x, y) = x^T A y, the expected payoff when the mixed strategies x and y are played. For mixed-strategy games, we compare the following two quantities:

    column player plays first:   max_y min_x x^T A y
    column player plays second:  min_x max_y x^T A y

where x and y range over probability vectors. It is easy to see that max_y min_x x^T A y <= min_x max_y x^T A y. Is it possible that equality holds? The following theorem gives a positive answer.

Theorem 5.1 (von Neumann Minmax Theorem). There exist probability vectors x* and y* such that

    max_y min_x x^T A y = min_x max_y x^T A y = (x*)^T A y*.
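The quantities in this section can be computed directly for the Paper-Scissors-Stone matrix. The sketch below (pure Python, no LP solver) evaluates the pure-strategy order-of-play values and then the uniform mixed strategy; the sign convention is that payoff[i][j] is the payoff to the column player, with each strategy beating exactly one other.

```python
# Pure vs mixed strategies in the Paper-Scissors-Stone game.
# payoff[i][j] = payoff to the column player when the row player plays i
# and the column player plays j (order: Paper, Stone, Scissors).
payoff = [[0, -1,  1],
          [1,  0, -1],
          [-1, 1,  0]]

n = len(payoff)

# Column player reveals her pure strategy first: the row player answers
# with the row minimizing her payoff.
first = max(min(payoff[i][j] for i in range(n)) for j in range(n))
# Column player plays second: she answers the row player's choice.
second = min(max(payoff[i][j] for j in range(n)) for i in range(n))
print(first, second)   # -1 1 -- a big advantage to playing second

# With the uniform mixed strategy x = y = (1/3, 1/3, 1/3), the expected
# payoff x^T A y is 0; by Theorem 5.1 this is the value of the game,
# since the symmetric game's minmax and maxmin meet at 0.
x = y = [1.0 / n] * n
value = sum(x[i] * payoff[i][j] * y[j] for i in range(n) for j in range(n))
print(value)   # 0.0
```

The gap between -1 and 1 for pure strategies, versus the single value 0 for mixed strategies, is exactly the content of the minmax theorem: randomization removes the advantage of moving second.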