5 Approximating MAXCUT
In this section we illustrate two facts: improved approximation algorithms can be obtained by considering relaxations more sophisticated than linear ones, and rounding a solution of the relaxation in a randomized fashion can be very useful. For this purpose, we consider approximation algorithms for the MAXCUT problem. The unweighted version of this problem is as follows.

Given: A graph $G = (V, E)$.
Find: A partition $(S, \bar{S})$ such that $d(S) := |\delta(S)|$ is maximized, where $\delta(S)$ denotes the set of edges with exactly one endpoint in $S$.

It can be shown that this problem is NP-hard and MAX SNP-complete, and so we cannot hope for an approximation algorithm with guarantee arbitrarily close to 1 unless $P = NP$. In the weighted version of the problem each edge has a weight $w_{ij}$ and we define
$$d(S) = \sum_{(i,j) \in E \,:\, i \in S,\, j \notin S} w_{ij}.$$
For simplicity we focus on the unweighted case; the results that we obtain also apply to the weighted case.

Recall that an $\alpha$-approximation algorithm for MAXCUT is a polynomial-time algorithm which delivers a cut $S$ such that $d(S) \geq \alpha \, z_{MC}$, where $z_{MC}$ is the value of an optimum cut. Until 1993 the best known $\alpha$ was 0.5, but now it is 0.878 due to an approximation algorithm of Goemans and Williamson [14]. We shall first look at three almost identical algorithms which have an approximation ratio of 0.5.

1. Randomized construction. We select $S$ uniformly from all subsets of $V$, i.e. for each $i \in V$ we put $i \in S$ with probability $\frac{1}{2}$, independently of $j \neq i$. Then
$$E[d(S)] = \sum_{(i,j) \in E} \Pr[(i,j) \in \delta(S)] \qquad \text{by linearity of expectation}$$
$$= \sum_{(i,j) \in E} \Pr[i \in S, j \notin S \text{ or } i \notin S, j \in S] = \sum_{(i,j) \in E} \frac{1}{2} = \frac{1}{2}|E|.$$
But clearly $z_{MC} \leq |E|$, and so we have $E[d(S)] \geq \frac{1}{2} z_{MC}$. Note that by comparing our cut to $|E|$, the best possible bound that we could obtain is $\frac{1}{2}$, since for $K_n$, the complete graph on $n$ vertices, we have $|E| = \binom{n}{2}$ while $z_{MC} = \lfloor n^2/4 \rfloor$.

2. Greedy procedure. Let $V = \{1, 2, \ldots, n\}$ and let $E_j = \{i : (i,j) \in E \text{ and } i < j\}$. It is clear that $\{E_j : j = 2, \ldots, n\}$ forms a partition of $E$.
The algorithm is:

Set $S = \{1\}$.
For $j = 2$ to $n$ do:
    if $|S \cap E_j| \leq \frac{1}{2}|E_j|$ then $S \leftarrow S \cup \{j\}$.
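In code, the greedy procedure might look like the following sketch (the graph representation and the function names are illustrative choices, not from the notes):

```python
# Greedy 0.5-approximation for unweighted MAXCUT.
# Vertices are labeled 1..n; edges is a list of pairs (i, j).
def greedy_maxcut(n, edges):
    # E_j = neighbors of j that precede j in the vertex ordering
    E = {j: [] for j in range(1, n + 1)}
    for (i, j) in edges:
        i, j = min(i, j), max(i, j)
        E[j].append(i)
    S = {1}
    for j in range(2, n + 1):
        # Place j on the side that cuts at least half of the edges in E_j.
        if sum(1 for i in E[j] if i in S) <= len(E[j]) / 2:
            S.add(j)
    return S

def cut_value(S, edges):
    return sum(1 for (i, j) in edges if (i in S) != (j in S))
```

By construction, each step cuts at least half of the edges in $E_j$, so the returned cut has value at least $|E|/2$.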
If we define $F_j = E_j \cap \delta(S)$ then we can see that $\{F_j : j = 2, \ldots, n\}$ is a partition of $\delta(S)$. By definition of the algorithm it is clear that $|F_j| \geq \frac{|E_j|}{2}$. By summing over $j$ we get $d(S) \geq \frac{|E|}{2} \geq \frac{z_{MC}}{2}$. In fact, the greedy algorithm can be obtained from the randomized algorithm by using the method of conditional expectations.

3. Local search. Say that $S$ is locally optimal if $\forall i \in S : d(S \setminus \{i\}) \leq d(S)$ and $\forall i \notin S : d(S \cup \{i\}) \leq d(S)$.

Lemma 6 If $S$ is locally optimal then $d(S) \geq \frac{|E|}{2}$.

Proof:
$$d(S) = \frac{1}{2} \sum_{i \in V} \{\text{number of cut edges incident to } i\} = \frac{1}{2} \sum_{i \in V} |\delta(S) \cap \delta(\{i\})| \geq \frac{1}{2} \sum_{i \in V} \frac{1}{2} |\delta(\{i\})| = \frac{1}{4} \cdot 2|E| = \frac{|E|}{2}.$$
The inequality is true because if $|\delta(S) \cap \delta(\{i\})| < \frac{1}{2}|\delta(\{i\})|$ for some $i$, then we can move $i$ to the other side of the cut and get an improvement, contradicting local optimality. $\Box$

In local search we move one vertex at a time from one side of the cut to the other until we reach a local optimum. In the unweighted case this is a polynomial-time algorithm, since the number of different values that a cut can take is $O(n^2)$. In the weighted case the running time can be exponential; Haken and Luby [15] have shown that this can happen even for 4-regular graphs. For cubic graphs the running time is polynomial [22].
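The local search procedure can be sketched as follows (a sketch for the unweighted case; the representation is an illustrative choice):

```python
# Local search for unweighted MAXCUT: flip one vertex at a time while the
# cut value strictly improves.  Terminates in the unweighted case because
# the cut value is an integer bounded by |E|.
def local_search_maxcut(vertices, edges):
    def value(S):
        return sum(1 for (i, j) in edges if (i in S) != (j in S))

    S = set()  # start from an arbitrary cut
    improved = True
    while improved:
        improved = False
        for v in vertices:
            flipped = S ^ {v}  # move v to the other side of the cut
            if value(flipped) > value(S):
                S = flipped
                improved = True
    return S
```

By Lemma 6, the returned cut has value at least $|E|/2$.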
Over the last 15-20 years a number of small improvements were made in the approximation ratio obtainable for MAXCUT. The ratio increased in the following manner:
$$\frac{1}{2} \;\to\; \frac{1}{2} + \frac{1}{2m} \;\to\; \frac{1}{2} + \frac{1}{2n} \;\to\; \frac{1}{2} + \frac{n-1}{4m},$$
where $m = |E|$ and $n = |V|$; asymptotically this is still 0.5.

5.1 Randomized 0.878 Algorithm

The algorithm that we now present is randomized, but it differs from our previous randomized algorithm in two important ways. The event $i \in S$ is not independent of the event $j \in S$. Moreover, we compare the cut that we obtain to an upper bound which is better than $|E|$.

Figure 5: The sphere $S_n$.

Suppose that for each vertex $i \in V$ we have a vector $v_i \in \mathbb{R}^n$, where $n = |V|$. Let $S_n$ be the unit sphere $\{x \in \mathbb{R}^n : \|x\| = 1\}$. Take a point $r$ uniformly distributed on $S_n$ and let $S = \{i \in V : v_i \cdot r \geq 0\}$ (Figure 5). Note that without loss of generality $\|v_i\| = 1$. Then by linearity of expectation:
$$E[d(S)] = \sum_{(i,j) \in E} \Pr[\mathrm{sign}(v_i \cdot r) \neq \mathrm{sign}(v_j \cdot r)]. \qquad (5)$$
Lemma 7
$$\Pr[\mathrm{sign}(v_i \cdot r) \neq \mathrm{sign}(v_j \cdot r)] = \Pr[\text{a random hyperplane separates } v_i \text{ and } v_j] = \frac{\theta}{\pi},$$
where $\theta = \arccos(v_i \cdot v_j)$ is the angle between $v_i$ and $v_j$.

Proof: This result is easy to see but a little difficult to formalize. Let $P$ be the 2-dimensional plane containing $v_i$ and $v_j$. Then $P \cap S_n$ is a circle. With probability 1, the hyperplane $H = \{x : x \cdot r = 0\}$ intersects this circle in exactly two points $s$ and $t$ which are diametrically opposed (see Figure 6). By symmetry, $s$ and $t$ are uniformly distributed on the circle. The vectors $v_i$ and $v_j$ are separated by the hyperplane $H$ if and only if either $s$ or $t$ lies on the smaller arc between $v_i$ and $v_j$. This happens with probability $\frac{2\theta}{2\pi} = \frac{\theta}{\pi}$. $\Box$

Figure 6: The plane $P$.

From equation (5) and Lemma 7 we obtain:
$$E[d(S)] = \sum_{(i,j) \in E} \frac{\arccos(v_i \cdot v_j)}{\pi}.$$
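Lemma 7 can be checked by simulation. The sketch below draws random hyperplanes (via uniformly random normal vectors $r$) and compares the empirical separation frequency to $\theta/\pi$; the dimension and trial count are arbitrary choices:

```python
# Monte Carlo check of Lemma 7: a uniformly random hyperplane through the
# origin separates unit vectors v_i and v_j with probability theta / pi.
import math
import random

def random_unit_vector(n):
    # A normalized Gaussian vector is uniformly distributed on the sphere.
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def separation_frequency(vi, vj, trials=100000):
    # Fraction of random hyperplanes {x : x . r = 0} separating vi and vj.
    n = len(vi)
    hits = 0
    for _ in range(trials):
        r = random_unit_vector(n)
        if (dot(vi, r) >= 0) != (dot(vj, r) >= 0):
            hits += 1
    return hits / trials
```

For any pair of unit vectors, the observed frequency should be close to $\arccos(v_i \cdot v_j)/\pi$.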
Observe that $E[d(S)] \leq z_{MC}$, and so
$$\max_{v_i} E[d(S)] \leq z_{MC},$$
where we maximize over all choices of the $v_i$'s. We actually have $\max_{v_i} E[d(S)] = z_{MC}$. Let $T$ be a cut such that $d(T) = z_{MC}$ and let $e$ be the unit vector whose first component is 1 and whose other components are 0. If we set
$$v_i = \begin{cases} e & \text{if } i \in T \\ -e & \text{otherwise,} \end{cases}$$
then with probability 1 the cut $(S, \bar{S})$ coincides with $(T, \bar{T})$. This means that $E[d(S)] = z_{MC}$.

Corollary 8
$$z_{MC} = \max_{\|v_i\| = 1} \frac{1}{\pi} \sum_{(i,j) \in E} \arccos(v_i \cdot v_j).$$

Unfortunately this is as difficult to solve as the original problem, and so at first glance we have not made any progress.

5.2 Choosing a good set of vectors
Let $f : [-1, 1] \to [0, 1]$ be a function which satisfies $f(-1) = 1$ and $f(1) = 0$. Consider the following program:
$$(P) \quad \max \sum_{(i,j) \in E} f(v_i \cdot v_j) \quad \text{subject to: } \|v_i\| = 1, \; i \in V.$$
If we denote the optimal value of this program by $z_P$, then we have $z_{MC} \leq z_P$. This is because if we have a cut $T$ then we can let
$$v_i = \begin{cases} e & \text{if } i \in T \\ -e & \text{otherwise.} \end{cases}$$
Hence $\sum_{(i,j) \in E} f(v_i \cdot v_j) = d(T)$, and $z_{MC} \leq z_P$ follows immediately.
5.3 The Algorithm

The framework of the 0.878-approximation algorithm for MAXCUT can now be presented:

1. Solve $(P)$ to get a set of vectors $\{v_1, \ldots, v_n\}$.
2. Uniformly select $r \in S_n$.
3. Set $S = \{i : v_i \cdot r \geq 0\}$.

Theorem 9
$$E[d(S)] \geq \alpha \, z_P \geq \alpha \, z_{MC},$$
where
$$\alpha = \min_{-1 \leq x \leq 1} \frac{\arccos(x)}{\pi \, f(x)}.$$

Proof:
$$E[d(S)] = \sum_{(i,j) \in E} \frac{\arccos(v_i \cdot v_j)}{\pi} \geq \alpha \sum_{(i,j) \in E} f(v_i \cdot v_j) = \alpha \, z_P \geq \alpha \, z_{MC}. \qquad \Box$$
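Once $f$ is fixed, the minimization defining $\alpha$ can be carried out numerically. For the linear choice $f(x) = (1-x)/2$ considered in the next subsection, a simple grid search recovers the constant; the grid size is an arbitrary choice, and the endpoint $x = 1$ is excluded since the ratio tends to infinity there:

```python
# Grid search for alpha = min over x in [-1, 1) of (arccos(x)/pi) / f(x),
# with the linear choice f(x) = (1 - x)/2.  A numerical sanity check only.
import math

def ratio(x):
    return (math.acos(x) / math.pi) / ((1.0 - x) / 2.0)

N = 200000  # grid resolution (arbitrary)
x_star = min((-1.0 + 2.0 * k / N for k in range(1, N)), key=ratio)
alpha = ratio(x_star)
```

The minimum is attained near $x \approx -0.689$, giving $\alpha \approx 0.87856$.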
We must now choose $f$ such that $(P)$ can be solved in polynomial time and $\alpha$ is as large as possible. We shall show that $(P)$ can be solved in polynomial time whenever $f$ is linear, and so if we define
$$f(x) = \frac{1-x}{2},$$
then our first criterion is satisfied. Note that $f(-1) = 1$ and $f(1) = 0$. With this choice of $f$,
$$\alpha = \min_{-1 \leq x \leq 1} \frac{2 \arccos(x)}{\pi (1 - x)} = \frac{2}{\pi} \cdot \frac{\arccos(-0.689)}{1 - (-0.689)} = 0.87856\ldots,$$
the minimum being attained at $x \approx -0.689$. See Figure 7.

Figure 7: Calculating $\alpha$.

5.4 Solving (P)

We now turn our attention to solving:
$$(P) \quad \max \sum_{(i,j) \in E} \frac{1}{2}(1 - v_i \cdot v_j) \quad \text{subject to: } \|v_i\| = 1, \; i \in V.$$
Let $Y = (y_{ij})$ where $y_{ij} = v_i \cdot v_j$. Then:

$\|v_i\| = 1 \iff y_{ii} = 1$ for all $i$.

$y_{ij} = v_i \cdot v_j$ for some vectors $v_i \implies Y \succeq 0$, where $Y \succeq 0$ means that $Y$ is positive semidefinite: $\forall x : x^T Y x \geq 0$. This is true because
$$x^T Y x = \sum_i \sum_j x_i x_j \, v_i \cdot v_j = \Big\| \sum_i x_i v_i \Big\|^2 \geq 0.$$
Conversely, if $Y \succeq 0$ and $y_{ii} = 1$ for all $i$, then it can be shown that there exists a set of $v_i$'s such that $y_{ij} = v_i \cdot v_j$. Hence $(P)$ is equivalent to
$$(P') \quad \max \sum_{(i,j) \in E} \frac{1}{2}(1 - y_{ij}) \quad \text{subject to: } Y \succeq 0, \; y_{ii} = 1, \; i \in V.$$
Note that $Q := \{Y : Y \succeq 0, \; y_{ii} = 1 \text{ for all } i\}$ is convex: if $A \succeq 0$ and $B \succeq 0$ then $A + B \succeq 0$, and also $\frac{A+B}{2} \succeq 0$. It can be shown that maximizing a concave function over a convex set can be done in polynomial time. Hence we can solve $(P')$ in polynomial time, since linear functions are concave. This completes the analysis of the algorithm.

5.5 Remarks

1. The optimum $Y$ could be irrational, but in this case we can find a solution with an arbitrarily small error in polynomial time.
2. To solve $(P')$ in polynomial time we could use a variation of the interior point method for linear programming.
3. Given $Y$, the $v_i$'s can be obtained using a Cholesky factorization $Y = V V^T$.
4. The algorithm can be derandomized using the method of conditional expectations. This is quite intricate.
5. The analysis is very nearly tight: for the 5-cycle we have $z_{MC} = 4$ and $z_P = \frac{5}{2}\left(1 + \cos\frac{\pi}{5}\right) = \frac{25 + 5\sqrt{5}}{8}$, which implies that $\frac{z_{MC}}{z_P} = 0.88445\ldots$

6 Bin Packing and P||Cmax
One can push the notion of approximation algorithms a bit further than we have been doing and define the notion of approximation schemes:

Definition 4 A polynomial approximation scheme (pas) is a family of algorithms $\{A_\epsilon : \epsilon > 0\}$ such that for each $\epsilon > 0$, $A_\epsilon$ is a $(1 + \epsilon)$-approximation algorithm which runs in time polynomial in the input size for fixed $\epsilon$.

Definition 5 A fully polynomial approximation scheme (fpas) is a pas whose running time is polynomial both in the input size and in $1/\epsilon$.

It is known that if $\Pi$ is a strongly NP-complete problem, then $\Pi$ has no fpas unless $P = NP$. From the result of Arora et al. described in Section 2, we also know that there is no pas for any MAX-SNP-hard problem unless $P = NP$.

We now consider two problems which have a very similar flavor; in fact, they correspond to the same NP-complete decision problem. However, they differ considerably in terms of approximability: one has a pas, the other does not.

Bin Packing: Given item sizes $a_1, a_2, \ldots, a_n \geq 0$ and a bin size $T$, find a partition $I_1, \ldots, I_k$ of $\{1, \ldots, n\}$ such that $\sum_{i \in I_l} a_i \leq T$ for each $l$ and $k$ is minimized
(the items in $I_l$ are assigned to bin $l$).

$P||C_{max}$: Given $n$ jobs with processing times $p_1, \ldots, p_n$ and $m$ machines, find a partition $\{I_1, \ldots, I_m\}$ of $\{1, \ldots, n\}$ such that the makespan, defined as $\max_i \sum_{j \in I_i} p_j$, is minimized. The makespan represents the maximum completion time on any machine, given that the jobs in $I_i$ are assigned to machine $i$.
The decision versions of the two problems are identical and NP-complete. However, when we consider approximation algorithms for the two problems, we get completely different results. In the case of the bin packing problem there is no $\alpha$-approximation algorithm with $\alpha < 3/2$, unless $P = NP$.

Proposition 10 There is no $\alpha$-approximation algorithm with $\alpha < 3/2$ for bin packing, unless $P = NP$, as seen in Section 2.

However, we shall see that for $P||C_{max}$ we have $(1 + \epsilon)$-approximation algorithms for any $\epsilon > 0$.

Definition 6 An algorithm $A$ has an asymptotic performance guarantee of $\alpha$ if
$$\alpha = \limsup_{k \to \infty} \alpha_k, \quad \text{where } \alpha_k = \sup_I \left\{ \frac{A(I)}{OPT(I)} : OPT(I) = k \right\},$$
$OPT(I)$ denotes the optimum value of instance $I$, and $A(I)$ denotes the value returned by algorithm $A$.

For $P||C_{max}$, there is no difference between an asymptotic performance guarantee and a performance guarantee. This follows from the fact that $P||C_{max}$ satisfies a scaling property: an instance with value $\lambda \, OPT(I)$ can be constructed by multiplying every processing time $p_j$ by $\lambda$. Using this definition we can analogously define a polynomial asymptotic approximation scheme (paas) and a fully polynomial asymptotic approximation scheme (fpaas). We now state some results to illustrate the difference between the two problems when we consider approximation algorithms.

1. For bin packing, there does not exist an $\alpha$-approximation algorithm with $\alpha < 3/2$, unless $P = NP$. Therefore there is no pas for bin packing unless $P = NP$.
2. For $P||C_{max}$ there exists a pas. This is due to Hochbaum and Shmoys [17]. We will study this algorithm in more detail in today's lecture.
3. For bin packing there exists a paas (Fernandez de la Vega and Lueker [7]).
4. For $P||C_{max}$ there exists no fpaas unless $P = NP$. This is because the existence of an fpaas implies the existence of an fpas, and the existence of an fpas is ruled out unless $P = NP$, because $P||C_{max}$ is strongly NP-complete.
5. For bin packing there even exists an fpaas. This was shown by Karmarkar and Karp [18].

6.1 Approximation algorithm for P||Cmax
We will now present a polynomial approximation scheme for the $P||C_{max}$ scheduling problem, discovered by Hochbaum and Shmoys [17]. The idea is to use a relation similar to the one between an optimization problem and its decision problem: if we have a way to solve the decision problem, we can use binary search to find the exact solution. Similarly, in order to obtain a $(1 + \epsilon)$-approximation algorithm, we use a so-called $(1 + \epsilon)$-relaxed decision version of the problem together with binary search.

Definition 7 A $(1 + \epsilon)$-relaxed decision version of $P||C_{max}$ is a procedure that, given $\epsilon$
and a deadline $T$, returns either:

"no" if there does not exist a schedule with makespan $\leq T$, or
"yes" if a schedule with makespan $\leq (1 + \epsilon) T$ exists; in the case of "yes", the actual schedule must also be provided.

Notice that in some cases both answers are valid. In such a case, we do not care whether the procedure outputs "yes" or "no". Suppose we have such a procedure. Then we use binary search to find the solution. To begin our binary search, we must find an interval containing the optimal $C_{max}$. Notice that $\sum_j p_j / m$ is the average load per machine and $\max_j p_j$ is the length of the longest job. We can bound the optimal $C_{max}$ as follows.

Figure 8: List scheduling.

Lemma 11 Let
$$L = \max\left( \max_j p_j, \; \frac{\sum_j p_j}{m} \right);$$
then $L \leq C_{max} \leq 2L$.

Proof: Since the longest job must be completed, we have $\max_j p_j \leq C_{max}$. Also, since $\sum_j p_j / m$ is the average load, we have $\sum_j p_j / m \leq C_{max}$. Thus $L \leq C_{max}$.

The upper bound relies on the concept of list scheduling, which dictates that a job is never processed on some machine if it can be processed earlier on another machine. That is, we require that if there is a job waiting and an idle machine, we must use this machine to do the job. We claim that for such a schedule $C_{max} \leq 2L$. Consider the job that finishes last, say job $k$. Notice that when it starts, all other machines are busy. Moreover, the time elapsed up to that point is no more than the average load, which is at most $L$; since $p_k \leq L$ as well, job $k$ finishes by time $2L$. $\Box$
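The list scheduling rule used in the upper-bound argument can be sketched as follows; the heap-based representation is an illustrative choice, not part of the notes:

```python
# List scheduling: always assign the next job to the currently least-loaded
# machine (equivalently, never leave a machine idle while a job waits).
# By Lemma 11 the resulting makespan is at most 2L.
import heapq

def list_schedule(p, m):
    """p: list of processing times; m: number of machines. Returns makespan."""
    loads = [0] * m
    heapq.heapify(loads)
    for pj in p:
        load = heapq.heappop(loads)  # least-loaded machine
        heapq.heappush(loads, load + pj)
    return max(loads)
```

For any instance, the returned makespan lies between $L$ and $2L$.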
Winter '12, Mirzaian, Algorithms, Data Structures