# Advanced Algorithms 2.2



## 11.2 Size of the Output

In order to even hope to solve a linear program in polynomial time, we had better make sure that the solution is representable in size polynomial in $L$. We already know that if the LP is feasible, there is at least one vertex which is an optimal solution. Thus, when finding an optimal solution to the LP, it makes sense to restrict our attention to vertices only. The following theorem ensures that vertices have a compact representation.

**Theorem 15** Let $x$ be a vertex of the polyhedron defined by $Ax = b$, $x \geq 0$. Then

$$x^T = \left( \frac{p_1}{q}, \frac{p_2}{q}, \ldots, \frac{p_n}{q} \right),$$

where the $p_i$ ($i = 1, \ldots, n$) and $q$ are natural numbers with $0 \leq p_i < 2^L$ and $1 \leq q < 2^L$.

**Proof:** Since $x$ is a basic feasible solution, there exists a basis $B$ such that $x_B = A_B^{-1} b$ and $x_N = 0$. Thus, we can set $p_j = 0$ for all $j \in N$, and focus our attention on the $x_j$'s with $j \in B$. We know from linear algebra that

$$x_B = A_B^{-1} b = \frac{1}{\det A_B} \operatorname{cof}(A_B)\, b,$$

where $\operatorname{cof}(A_B)$ is the cofactor matrix of $A_B$. Every entry of $\operatorname{cof}(A_B)$ is the determinant of some submatrix of $A$. Let $q = |\det A_B|$. Then $q$ is an integer since $A_B$ has integer entries, $q \geq 1$ since $A_B$ is invertible, and $q \leq \det_{\max} < 2^L$. Finally, note that $p_B = q\, x_B = |\operatorname{cof}(A_B)\, b|$, so

$$p_i \leq \sum_{j=1}^m |\operatorname{cof}(A_B)_{ij}|\, |b_j| \leq m \det{}_{\max}\, b_{\max} < 2^L. \qquad \Box$$

## 12 Complexity of Linear Programming

In this section, we show that linear programming is in NP ∩ co-NP. This will follow from duality and the estimates on the size of any vertex given in the previous section. Let us define the following decision problem:

**Definition 8 (LP)**
*Input:* Integral $A$, $b$, $c$, and a rational number $\lambda$.
*Question:* Is $\min\{c^T x : Ax = b,\ x \geq 0\} \leq \lambda$?

**Theorem 16** LP ∈ NP ∩ co-NP.

**Proof:** First, we prove that LP ∈ NP. If the linear program is feasible and bounded, the "certificate" for verification of instances for which $\min\{c^T x : Ax = b,\ x \geq 0\} \leq \lambda$ is a vertex $x'$ of $\{Ax = b,\ x \geq 0\}$ such that $c^T x' \leq \lambda$. This vertex $x'$ always exists, since by assumption the minimum is finite. Given $x'$, it is easy to check in polynomial time whether $Ax' = b$ and $x' \geq 0$. We also need the size of such a certificate to be polynomially bounded in the size of the input; this was shown in Section 11.2.

If the linear program is feasible and unbounded, then, by strong duality, the dual is infeasible. Applying Farkas' lemma to the dual, we obtain the existence of $\tilde{x}$ with $A\tilde{x} = 0$, $\tilde{x} \geq 0$ and $c^T \tilde{x} = -1 < 0$. Our certificate in this case consists of both a vertex of $\{Ax = b,\ x \geq 0\}$ (to show feasibility) and a vertex of $\{Ax = 0,\ x \geq 0,\ c^T x = -1\}$ (to show unboundedness, given feasibility). By choosing a vertex $x'$ of $\{Ax = 0,\ x \geq 0,\ c^T x = -1\}$, we ensure that $x'$ has polynomial size (again, see Section 11.2). This proves that LP ∈ NP. Notice that when the linear program is infeasible, the answer to LP is "no", but we are not required to exhibit a certificate for such instances in order to show LP ∈ NP.

Secondly, we show that LP ∈ co-NP, i.e. $\overline{\text{LP}}$ ∈ NP, where $\overline{\text{LP}}$ is defined as:
*Input:* $A$, $b$, $c$, and a rational number $\lambda$.
*Question:* Is $\min\{c^T x : Ax = b,\ x \geq 0\} > \lambda$?

If $\{x : Ax = b,\ x \geq 0\}$ is nonempty, we can use strong duality to show that $\overline{\text{LP}}$ is indeed equivalent to:
*Input:* $A$, $b$, $c$, and a rational number $\lambda$.
*Question:* Is $\max\{b^T y : A^T y \leq c\} > \lambda$?
which is also in NP, for the same reason as LP is. If the primal is infeasible, by Farkas' lemma we know of the existence of a $y$ such that $A^T y \geq 0$ and $b^T y = -1 < 0$, which certifies infeasibility. This completes the proof of the theorem. $\Box$

## 13 Solving a Linear Program in Polynomial Time

The first polynomial-time algorithm for linear programming is the so-called ellipsoid algorithm, proposed by Khachiyan in 1979 [6]. The ellipsoid algorithm was in fact first developed for convex programming (of which linear programming is a special case) in a series of papers by the Russian mathematicians A. Ju. Levin, D. B. Judin and A. S. Nemirovskii, and is related to work of N. Z. Shor. Though of polynomial running time, the algorithm is impractical for linear programming. Nevertheless, it has extensive theoretical applications in combinatorial optimization.
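The NP-membership argument above rests on the fact that checking a certificate vertex is mechanical: verify $Ax' = b$, $x' \geq 0$, and $c^T x' \leq \lambda$. A minimal sketch of such a verifier in Python, using exact rational arithmetic so no precision is lost (the instance below is a toy example of our own, not one from these notes):

```python
from fractions import Fraction

def verify_lp_certificate(A, b, c, x, lam):
    """Polynomial-time check that x certifies min{c^T x : Ax = b, x >= 0} <= lam."""
    m, n = len(A), len(x)
    # Feasibility: Ax = b ...
    for i in range(m):
        if sum(Fraction(A[i][j]) * x[j] for j in range(n)) != Fraction(b[i]):
            return False
    # ... and x >= 0.
    if any(xj < 0 for xj in x):
        return False
    # Objective value at most lambda.
    return sum(Fraction(c[j]) * x[j] for j in range(n)) <= lam

# Toy instance: min x1 + x2 subject to x1 + x2 = 1, x >= 0.
A = [[1, 1]]
b = [1]
c = [1, 1]
vertex = [Fraction(1), Fraction(0)]   # the vertex (1, 0), objective value 1
```

Exact `Fraction` arithmetic mirrors Theorem 15: every vertex is rational with numerator and denominator bounded by $2^L$, so the whole check runs in time polynomial in the input size.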
For example, the stable set problem on so-called perfect graphs can be solved in polynomial time using the ellipsoid algorithm. This is, however, a non-trivial non-combinatorial algorithm.

In 1984, Karmarkar presented another polynomial-time algorithm for linear programming. His algorithm avoids the combinatorial complexity (inherent in the simplex algorithm) of the vertices, edges and faces of the polyhedron by staying well inside the polyhedron (see Figure 6). His algorithm led to many other algorithms for linear programming based on similar ideas. These algorithms are known as interior point methods.

*Figure 6: Exploring the interior of a convex body.*

It still remains an open question whether there exists a strongly polynomial algorithm for linear programming, i.e. an algorithm whose running time depends only on $m$ and $n$, and not on the size of the entries of $A$, $b$ or $c$. In the rest of these notes, we discuss an interior-point method for linear programming and show its polynomiality.

High-level description of an interior-point algorithm:

1. If $x$ (the current solution) is close to the boundary, map the polyhedron onto another one such that $x$ is well in the interior of the new polyhedron (see Figure 7).
2. Make a step in the transformed space.
3. Repeat steps 1 and 2 until we are close enough to an optimal solution.

Before we describe the algorithm, we give a theorem whose corollary will be a key tool in determining when we have reached an optimal solution.

**Theorem 17** Let $x_1, x_2$ be vertices of $Ax = b$, $x \geq 0$. If $c^T x_1 \neq c^T x_2$, then $|c^T x_1 - c^T x_2| > 2^{-2L}$.

**Proof:** By Theorem 15, there exist $q_1, q_2$ such that $1 \leq q_1, q_2 < 2^L$ and $q_1 x_1,\ q_2 x_2 \in \mathbb{N}^n$. Furthermore,

$$|c^T x_1 - c^T x_2| = \left| \frac{q_1 c^T x_1}{q_1} - \frac{q_2 c^T x_2}{q_2} \right| = \frac{\left| q_2 (q_1 c^T x_1) - q_1 (q_2 c^T x_2) \right|}{q_1 q_2} \geq \frac{1}{q_1 q_2} > \frac{1}{2^L \cdot 2^L} = 2^{-2L},$$

where the first inequality holds because the numerator is a positive integer: $q_1 c^T x_1$ and $q_2 c^T x_2$ are integers and $c^T x_1 \neq c^T x_2$. $\Box$

**Corollary 18** Let $z = \min\{c^T x : Ax = b,\ x \geq 0\}$ be the optimum value over the polyhedron $P$. Assume $x$ is feasible for $P$ and such that $c^T x \leq z + 2^{-2L}$.
Then any vertex $x'$ such that $c^T x' \leq c^T x$ is an optimal solution of the LP.

**Proof:** Suppose $x'$ is not optimal. Then there exists an optimal vertex $x^*$ with $c^T x^* = z$. Since $x'$ is not optimal, $c^T x' \neq c^T x^*$, and by Theorem 17,

$$c^T x' - c^T x^* > 2^{-2L}.$$

Therefore

$$c^T x' > c^T x^* + 2^{-2L} = z + 2^{-2L} \geq c^T x \geq c^T x',$$

where the second inequality is the assumption on $x$ and the last is the definition of $x'$. Thus $c^T x' > c^T x'$, a contradiction. $\Box$

What this corollary tells us is that we do not need to be very precise when choosing an optimal vertex. More precisely, we only need to compute the objective function with error less than $2^{-2L}$: if we find a vertex within that margin of error, it will be optimal.

*Figure 7: A centering mapping. If $x$ is close to the boundary of the polyhedron $P$, we map $P$ onto another polyhedron $P'$ such that the image $x'$ of $x$ is closer to the center of $P'$.*

In the rest of these notes we present Ye's interior point algorithm for linear programming [9]. Ye's algorithm (among several others) achieves the best known asymptotic running time in the literature, and our presentation incorporates some simplifications made by Freund [3]. We are going to consider the following linear programming problem:

$$\text{(P)} \quad \begin{array}{ll} \text{minimize} & Z = c^T x \\ \text{subject to} & Ax = b, \\ & x \geq 0, \end{array}$$

and its dual

$$\text{(D)} \quad \begin{array}{ll} \text{maximize} & W = b^T y \\ \text{subject to} & A^T y + s = c, \\ & s \geq 0. \end{array}$$

The algorithm is primal-dual, meaning that it simultaneously solves both the primal and dual problems. It keeps track of a primal solution $x$ and a vector of dual slacks $s$ (i.e. $\exists y : A^T y = c - s$) such that $x > 0$ and $s > 0$. The basic idea of this algorithm is to stay away from the boundaries of the polyhedron (the hyperplanes $x_j = 0$ and $s_j = 0$, $j = 1, 2, \ldots, n$) while approaching optimality. In other words, we want to make the duality gap

$$c^T x - b^T y = x^T s > 0$$

very small but stay away from the boundaries. Two tools will be used to achieve this goal in polynomial time.

**Tool 1: Scaling** (see Figure 7). Scaling is a crucial ingredient in interior point methods.
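The identity $c^T x - b^T y = x^T s$ defining the duality gap follows directly from $c = A^T y + s$ and $Ax = b$. A quick numerical sanity check (the data below is randomly generated purely for illustration; $b$ and $c$ are derived so that both feasibility conditions hold by construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative primal-dual pair: pick A, an interior x > 0, y, and s > 0,
# then derive b and c so that Ax = b and A^T y + s = c hold by construction.
m, n = 2, 4
A = rng.integers(1, 5, size=(m, n)).astype(float)
x = rng.uniform(0.5, 2.0, size=n)     # primal interior point
y = rng.uniform(-1.0, 1.0, size=m)
s = rng.uniform(0.5, 2.0, size=n)     # dual slacks
b = A @ x
c = A.T @ y + s

gap_objectives = c @ x - b @ y        # c^T x - b^T y
gap_slacks = x @ s                    # x^T s
```

The two quantities agree (up to floating-point error), and the gap is strictly positive as long as both iterates are strictly positive.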
The two types of scaling commonly used are projective scaling (the one used by Karmarkar) and affine scaling (the one we are going to use).

## 13.1 Ye's Interior Point Algorithm

Suppose the current iterate is $x > 0$ and $s > 0$, where $x = (x_1, x_2, \ldots, x_n)^T$. The affine scaling maps $x$ to $x' = X^{-1} x$, where $X$ is the diagonal matrix

$$X = \begin{pmatrix} x_1 & 0 & \cdots & 0 \\ 0 & x_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & x_n \end{pmatrix}.$$

Notice this transformation maps the current iterate $x$ to $e = (1, \ldots, 1)^T$.

Using matrix notation, we can rewrite the linear program (P) in terms of the transformed variables ($x = X x'$) as:

$$\begin{array}{ll} \text{minimize} & Z = c^T X x' \\ \text{subject to} & A X x' = b, \\ & x' \geq 0. \end{array}$$

If we define $\bar{c} = Xc$ (note that $X = X^T$) and $\bar{A} = AX$, we obtain a linear program in the original form:

$$\begin{array}{ll} \text{minimize} & Z = \bar{c}^T x' \\ \text{subject to} & \bar{A} x' = b, \\ & x' \geq 0. \end{array}$$

We can also write the dual problem (D) as:

$$\begin{array}{ll} \text{maximize} & W = b^T y \\ \text{subject to} & (AX)^T y + X s = X c, \\ & X s \geq 0, \end{array}$$

or, equivalently,

$$\begin{array}{ll} \text{maximize} & W = b^T y \\ \text{subject to} & \bar{A}^T y + s' = \bar{c}, \\ & s' \geq 0, \end{array}$$

where $s' = X s$, i.e.

$$s' = \begin{pmatrix} s_1 x_1 \\ s_2 x_2 \\ \vdots \\ s_n x_n \end{pmatrix}.$$

One can easily see that

$$x_j s_j = x'_j s'_j \tag{3}$$

and, therefore, the duality gap $x^T s = \sum_j x_j s_j$ remains unchanged under affine scaling. As a consequence, we will see later that one can always work equivalently in the transformed space.

**Tool 2: Potential Function.** Our potential function is designed to measure how small the duality gap is and how far the current iterate is from the boundaries. In fact we are going to use the following "logarithmic barrier function".

**Definition 9 (Potential Function $G(x, s)$)**

$$G(x, s) \triangleq q \ln(x^T s) - \sum_{j=1}^n \ln(x_j s_j), \quad \text{for some } q,$$

where $q$ is a parameter that must be chosen appropriately. Note that the first term goes to $-\infty$ as the duality gap tends to $0$, and the second term goes to $+\infty$ as $x_i \to 0$ or $s_i \to 0$
for some $i$.

Two questions arise immediately concerning this potential function.

**Question 1: How do we choose $q$?**

**Lemma 19** Let $x, s > 0$ be vectors in $\mathbb{R}^n$. Then

$$n \ln(x^T s) - \sum_{j=1}^n \ln(x_j s_j) \geq n \ln n.$$

**Proof:** Given any $n$ positive numbers $t_1, \ldots, t_n$, we know that their geometric mean does not exceed their arithmetic mean, i.e.

$$\left( \prod_{j=1}^n t_j \right)^{1/n} \leq \frac{1}{n} \sum_{j=1}^n t_j.$$

Taking logarithms of both sides, we have

$$\frac{1}{n} \sum_{j=1}^n \ln t_j \leq \ln\left( \sum_{j=1}^n t_j \right) - \ln n.$$

Rearranging this inequality, we get

$$n \ln\left( \sum_{j=1}^n t_j \right) - \sum_{j=1}^n \ln t_j \geq n \ln n.$$

(In fact, the last inequality can be derived directly from the concavity of the logarithm.) The lemma follows by setting $t_j = x_j s_j$. $\Box$

Since our objective is that $G \to -\infty$ as $x^T s \to 0$ (our primary goal being to get close to optimality), according to Lemma 19 we should choose some $q > n$ (notice that $\ln(x^T s) \to -\infty$ as $x^T s \to 0$). In particular, if we choose $q = n + 1$, the algorithm will terminate after $O(nL)$ iterations. In fact we are going to set $q = n + \sqrt{n}$, which gives us the smallest number, $O(\sqrt{n}\,L)$, of iterations obtainable by this method.

**Question 2: When can we stop?**

Suppose that $x^T s \leq 2^{-2L}$. Then $c^T x - Z \leq c^T x - b^T y = x^T s \leq 2^{-2L}$, where $Z$ is the optimum value of the primal problem. From Corollary 18, the following claim follows immediately.

**Claim 20** If $x^T s \leq 2^{-2L}$, then any vertex $x^*$ satisfying $c^T x^* \leq c^T x$ is optimal.

In order to find $x^*$ from $x$, two methods can be used. One is based on purely algebraic techniques (but is a bit cumbersome to describe), while the other (the cleanest one in the literature) is based upon basis reduction for lattices. We shall not elaborate on this topic, although we'll get back to this issue when discussing basis reduction in lattices.
**Lemma 21** Let $x, s$ be feasible primal-dual vectors such that $G(x, s) \leq -k\sqrt{n}\,L$ for some constant $k$. Then

$$x^T s \leq e^{-kL}.$$

**Proof:** By the definition of $G(x, s)$ and the previous lemma, we have

$$-k\sqrt{n}\,L \geq G(x, s) = (n + \sqrt{n}) \ln(x^T s) - \sum_{j=1}^n \ln(x_j s_j) \geq \sqrt{n} \ln(x^T s) + n \ln n.$$

Rearranging, we obtain

$$\sqrt{n} \ln(x^T s) \leq -k\sqrt{n}\,L - n \ln n \leq -k\sqrt{n}\,L,$$

i.e. $\ln(x^T s) \leq -kL$, and therefore $x^T s \leq e^{-kL}$. $\Box$

The previous lemma and claim tell us that we can stop whenever $G(x, s) \leq -2\sqrt{n}\,L$. In practice, the algorithm can terminate even earlier, so it is a good idea to check from time to time whether we can already extract the optimal solution. Please notice that, according to Equation (3), the affine transformation does not change the value of the potential function. Hence we can work either in the original space or in the transformed space when we talk about the potential function.

## 14 Description of Ye's Interior Point Algorithm

**Initialization:** Set $i = 0$. Choose $x^0 > 0$, $s^0 > 0$ and $y^0$ such that $Ax^0 = b$, $A^T y^0 + s^0 = c$ and $G(x^0, s^0) = O(\sqrt{n}\,L)$. The details are not covered in class but can be found in the appendix. The general idea is as follows: by augmenting the linear program with additional variables, it is easy to obtain a feasible solution. Moreover, by carefully choosing the augmented linear program, it is possible to have feasible primal and dual solutions $x$ and $s$ such that all $x_j$'s and $s_j$'s are large (say, $2^L$). This can be seen to result in a potential of $O(\sqrt{n}\,L)$.

**Iteration:**

    while G(x^i, s^i) > -2 sqrt(n) L do
        take either a primal step (changing x^i only)
        or a dual step (changing s^i only)
        to obtain (x^{i+1}, s^{i+1});
        i := i + 1

The iterative step is as follows. Affine scaling maps $(x^i, s^i)$ to $(e, s')$. In this transformed space, the point is far away from the boundaries. Either a primal or a dual step occurs, giving $(\tilde{x}, \tilde{s})$ and reducing the potential function. The point is then mapped back to the original space, resulting in $(x^{i+1}, s^{i+1})$.
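The potential function and the stopping test in the loop above are straightforward to evaluate. A small numeric sketch with an illustrative (hand-picked, not algorithm-generated) strictly positive pair $x, s$, which also checks the arithmetic-geometric-mean bound of Lemma 19:

```python
import numpy as np

def potential(x, s, q):
    """Logarithmic barrier potential G(x, s) = q ln(x^T s) - sum_j ln(x_j s_j)."""
    return q * np.log(x @ s) - np.sum(np.log(x * s))

n = 4
q = n + np.sqrt(n)                    # the choice q = n + sqrt(n) from the text
x = np.array([0.5, 1.0, 2.0, 0.25])   # illustrative strictly positive iterate
s = np.array([1.0, 0.5, 0.125, 2.0])  # illustrative strictly positive dual slacks

G = potential(x, s, q)

# Lemma 19: n ln(x^T s) - sum_j ln(x_j s_j) >= n ln n for any positive x, s.
lemma_lhs = n * np.log(x @ s) - np.sum(np.log(x * s))
```

Because $q = n + \sqrt{n} > n$, driving $G$ below $-2\sqrt{n}\,L$ forces the duality gap below $e^{-2L}$ (Lemma 21), at which point Claim 20 applies.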
Next, we are going to describe precisely how the primal or dual step is made, such that

$$G(x^{i+1}, s^{i+1}) - G(x^i, s^i) \leq -\frac{7}{120} < 0$$

holds for either a primal or dual step, yielding an $O(\sqrt{n}\,L)$ total number of iterations.

*Figure 8: Null space of $A$ and gradient direction $g$.*

In order to find the new point $(\tilde{x}, \tilde{s})$ given the current iterate $(e, s')$ (remember that we are working in the transformed space), we compute the gradient of the potential function. This is the direction along which the value of the potential function changes at the highest rate. Let $g$ denote the gradient. Recalling that $(e, s')$ is the image of the current iterate, we obtain

$$g = \nabla_x G(x, s)\Big|_{(e, s')} = \left[ \frac{q}{x^T s}\, s - \begin{pmatrix} 1/x_1 \\ \vdots \\ 1/x_n \end{pmatrix} \right]_{(e, s')} = \frac{q}{e^T s'}\, s' - e. \tag{4}$$

We would like to maximize the change in $G$, so we would like to move in the direction of $-g$. However, we must ensure that the new point is still feasible (i.e. $A\tilde{x} = b$). Let $d$ be the projection of $g$ onto the null space $\{x : Ax = 0\}$ of $A$. Thus, we will move in the direction of $-d$.

**Claim 22** $d = \left( I - A^T (A A^T)^{-1} A \right) g$.

**Proof:** Since $g - d$ is orthogonal to the null space of $A$, it must be a linear combination of the row vectors of $A$. Hence we have

$$Ad = 0 \quad \text{and} \quad \exists w \ \text{s.t.} \ A^T w = g - d.$$

This implies ...
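Claim 22 can be sanity-checked numerically: the projected direction $d$ must satisfy $Ad = 0$, while the residual $g - d$ must lie in the row space of $A$. A short sketch (the matrix and gradient below are arbitrary illustrative values, assuming $A$ has full row rank so that $AA^T$ is invertible):

```python
import numpy as np

def project_to_nullspace(A, g):
    """Project g onto {x : Ax = 0} via d = (I - A^T (A A^T)^{-1} A) g  (Claim 22)."""
    n = A.shape[1]
    P = np.eye(n) - A.T @ np.linalg.inv(A @ A.T) @ A
    return P @ g

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])   # full row rank, so A A^T is invertible
g = np.array([1.0, -2.0, 0.5, 4.0])    # an illustrative gradient vector

d = project_to_nullspace(A, g)
```

Forming $(AA^T)^{-1}$ explicitly is fine for a sketch; a careful implementation would solve with a factorization of $AA^T$ instead of inverting it.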

*This note was uploaded on 02/13/2012 for the course CSE 4101 taught by Professor Mirzaian during the Winter '12 term at York University.*
