Approximation algorithms

As we have seen, some optimisation problems are "hard": by the hardness of the related decision problem, there is little chance of finding a poly-time algorithm that computes an optimal solution. Examples:

• largest clique
• smallest vertex cover
• largest independent set

But sometimes sub-optimal solutions are acceptable:

• a pretty large clique
• a pretty small vertex cover
• a pretty large independent set

provided the algorithms run in polynomial time (preferably with small exponents). Approximation algorithms compute near-optimal solutions. They have been known for thousands of years; consider approximations of the value of π (some engineers still use 4 these days :-)).

Consider an optimisation problem in which each potential solution has a positive cost (or quality), and we want a near-optimal solution. Depending on the problem, an optimal solution may be one with maximum possible cost (a maximisation problem, like maximum clique), or one with minimum possible cost (a minimisation problem, like minimum vertex cover).

An algorithm has an approximation ratio of ρ(n) if, for any input of size n, the cost C of its solution is within a factor ρ(n) of the cost C* of an optimal solution, i.e.

    max(C/C*, C*/C) ≤ ρ(n)

For maximisation problems, 0 < C ≤ C*, so C*/C gives the factor by which the optimal solution is better than the approximate one (note C*/C ≥ 1 and C/C* ≤ 1). For minimisation problems, 0 < C* ≤ C, so C/C* gives the factor by which the optimal solution is better than the approximate one (note C/C* ≥ 1 and C*/C ≤ 1).

The approximation ratio is never less than one: if C/C* < 1, then C*/C > 1. An algorithm with a guaranteed approximation ratio of ρ(n) is called a ρ(n)-approximation algorithm. A 1-approximation algorithm is optimal, and the larger the ratio, the worse the solution.

• For many NP-complete problems, there are constant-factor approximations (e.g. the computed clique is always at least half the size of a maximum-size clique),
• sometimes the best known approximation ratio grows with n,
• and sometimes there are even proven lower bounds on the ratio (for every approximation algorithm, the ratio is at least such-and-such, unless P = NP).

Sometimes a better ratio can be achieved by spending more computation time. An approximation scheme for an optimisation problem is an approximation algorithm that takes as input an instance plus a parameter ε > 0 such that, for any fixed ε, the scheme is a (1 + ε)-approximation algorithm (a ratio/time trade-off).

A scheme is a poly-time approximation scheme (PAS) if, for any fixed ε > 0, it runs in time polynomial in the input size. The running time can still increase dramatically with decreasing ε; consider e.g. T(n) = n^(2/ε):

    ε        T(n)      n = 10^1   n = 10^2   n = 10^3   n = 10^4
    1        n^2       10^2       10^4       10^6       10^8
    1/2      n^4       10^4       10^8       10^12      10^16
    1/4      n^8       10^8       10^16      10^24      10^32
    1/100    n^200     10^200     10^400     10^600     10^800

We want: if ε decreases by a constant factor, then the running time increases by at most some other constant factor, i.e. the running time is polynomial in both n and 1/ε. Examples: T(n) = (2/ε) · n^2, T(n) = (1/ε)^2 · n^3. Such a scheme is called a fully polynomial-time approximation scheme (FPAS).
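To make the ratio definition concrete, here is a minimal Python sketch; the function name approx_ratio and the example numbers are ours, purely for illustration. The same expression covers maximisation and minimisation problems, since it takes the larger of the two ratios.

    def approx_ratio(cost: float, opt: float) -> float:
        """Approximation ratio max(C/C*, C*/C); both costs must be positive."""
        assert cost > 0 and opt > 0
        return max(cost / opt, opt / cost)

    # Minimisation: a vertex cover of size 6 when the optimum is 3 -> ratio 2.0
    print(approx_ratio(6, 3))
    # Maximisation: a clique of size 4 when the optimum is 8 -> ratio 2.0
    print(approx_ratio(4, 8))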
Vertex cover

Problem: given a graph G = (V, E), find a smallest V' ⊆ V such that if (u, v) ∈ E, then u ∈ V' or v ∈ V' (or both). The decision problem is NP-complete, and the optimisation problem is at least as hard.

Here is a trivial 2-approximation algorithm (no scheme, the "2" is fixed). Input is some graph G = (V, E).

APPROX-VERTEX-COVER(G)
1: C ← ∅
2: E' ← E
3: while E' ≠ ∅ do
4:    let (u, v) be an arbitrary edge of E'
5:    C ← C ∪ {u, v}
6:    remove from E' all edges incident on either u or v
7: end while

[Figure: an example run on an input graph with vertices a to h. Step 1 chooses edge (c, e), step 2 chooses edge (d, g), step 3 chooses edge (a, b); the result is a cover of size 6, while an optimal cover has size 3.]

Claim: after termination, C is a vertex cover of size at most twice the size of an optimal (smallest) one.

Theorem. APPROX-VERTEX-COVER is a poly-time 2-approximation algorithm.

Proof. The running time is trivially bounded by O(V E) (at most |E| iterations, each of complexity at most O(V)); a bound of O(V + E) can easily be shown. Correctness: C clearly is a vertex cover.

Size of the cover: let A denote the set of edges that are picked ({(c, e), (d, g), (a, b)} in the example).

1. In order to cover the edges in A, any vertex cover, in particular an optimal cover C*, must include at least one endpoint of each edge in A. By construction of the algorithm, no two edges in A share an endpoint (once an edge is picked, all edges incident on either endpoint are removed). Therefore, no two edges in A are covered by the same vertex in C*, and |C*| ≥ |A|.

2. When an edge is picked, neither endpoint is already in C, thus |C| = 2|A|.

Combining (1) and (2) yields

    |C| = 2|A| ≤ 2|C*|

(q.e.d.)

Interesting observation: we could prove that the size of the vertex cover returned by the algorithm is at most twice the size of an optimal cover without knowing the latter. How? We lower-bounded the size of the optimal cover (the |C*| ≥ |A| step). One can show that A is in fact a maximal matching in G, and the size of any maximal matching is always a lower bound on the size of an optimal vertex cover (each matching edge has to be covered). The algorithm returns a vertex cover whose size is twice the size of the maximal matching A; relating the size of the returned solution to this lower bound is then a simple matter.
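A minimal runnable sketch of APPROX-VERTEX-COVER, under assumptions of ours: the graph is given as a list of edges (pairs of vertex names), and the example edge list below only approximates the figure from the slides.

    def approx_vertex_cover(edges):
        """Greedy 2-approximation: pick an arbitrary edge, add both of its
        endpoints to the cover, then discard every edge they touch."""
        cover = set()
        remaining = set(edges)
        while remaining:
            u, v = next(iter(remaining))           # arbitrary edge of E'
            cover.update((u, v))                   # C <- C ∪ {u, v}
            remaining = {(x, y) for (x, y) in remaining
                         if x not in (u, v) and y not in (u, v)}
        return cover

    # A graph similar to the slide example (edges assumed for illustration):
    E = [("a", "b"), ("b", "c"), ("c", "d"), ("c", "e"),
         ("d", "e"), ("d", "f"), ("d", "g"), ("e", "f")]
    print(approx_vertex_cover(E))   # a cover of size at most 2 * optimum

Note that the picked edges form a maximal matching, exactly as in the proof: once the loop ends, every remaining edge shares an endpoint with a picked one.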
The travelling-salesman problem

Problem: given a complete, undirected graph G = (V, E) with a non-negative integer cost c(u, v) for each edge, find a cheapest Hamiltonian cycle of G. We consider two cases: with and without the triangle inequality. c satisfies the triangle inequality if it is always cheapest to go directly from some u to some w; going by way of intermediate vertices can't be less expensive. The related decision problem is NP-complete in both cases.

TSP with triangle inequality

We'll again compute some structure whose weight is a lower bound for the length of an optimal TSP tour. It was a maximal matching for vertex cover; it is a minimum spanning tree now. We use the function MST-PRIM(G, c, r), which computes an MST for G and weight function c, given some arbitrary root r.

Input: G = (V, E), c : E → R

APPROX-TSP-TOUR(G, c)
1: Select an arbitrary r ∈ V to be the "root"
2: Compute an MST T for G and c from root r using MST-PRIM(G, c, r)
3: Let L be the list of vertices visited in a preorder tree walk of T
4: Return the Hamiltonian cycle that visits the vertices in the order L

[Figure: a set of points in a grid (vertices a to h); the MST rooted at a; the full walk of the MST; the resulting tour, cost ca. 19.1; an optimal tour, cost ca. 14.7.]

Theorem. APPROX-TSP-TOUR is a poly-time 2-approximation algorithm for the TSP problem with triangle inequality.

Proof. Polynomial running time is obvious: simple MST-PRIM takes Θ(V^2), and computing the preorder walk takes no longer. Correctness is obvious as well: a preorder walk is always a tour.

Let H* denote an optimal tour for the given set of vertices. Deleting any edge from H* gives a spanning tree, so the weight of a minimum spanning tree is a lower bound on the cost of an optimal tour:

    c(T) ≤ c(H*)

A full walk of T lists vertices when they are first visited, and also whenever they are returned to after visiting a subtree. Ex: a, b, c, b, h, b, a, d, e, f, e, g, e, d, a. The full walk W traverses every edge exactly twice (although it may visit some vertices far more often), thus

    c(W) = 2c(T)

Together with c(T) ≤ c(H*), this gives c(W) = 2c(T) ≤ 2c(H*).

We want to find a connection between the cost of W and the cost of "our" tour. Problem: W is in general not a proper tour, since vertices may be visited more than once. But by our friend, the triangle inequality, we can delete a visit to any vertex from W without increasing the cost: deleting a vertex v from walk W between visits to u and w means going from u directly to w, without visiting v. This way, we can successively remove all multiple visits to any vertex. Ex: the full walk a, b, c, b, h, b, a, d, e, f, e, g, e, d, a becomes a, b, c, h, d, e, f, g.

This ordering (with multiple visits deleted) is identical to that obtained by a preorder walk of T (with each vertex visited only once). It certainly is a Hamiltonian cycle; let's call it H. H is just what is computed by APPROX-TSP-TOUR. Since H is obtained by deleting vertices from W, we have c(H) ≤ c(W). Conclusion:

    c(H) ≤ c(W) ≤ 2c(H*)

(q.e.d.)

Although the factor 2 looks nice, there are better algorithms. There is a 3/2-approximation algorithm by Christofides (with triangle inequality), and Arora and Mitchell have shown that there is a PAS if the points lie in the Euclidean plane (where the triangle inequality holds).
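A minimal runnable sketch of APPROX-TSP-TOUR, under assumptions of ours: vertices are indexed 0..n-1, the costs form a symmetric matrix satisfying the triangle inequality, and the MST is built with the simple O(V^2) Prim mentioned in the text (function names are ours).

    def approx_tsp_tour(cost):
        """2-approximation for metric TSP: MST by Prim, then preorder walk."""
        n = len(cost)
        # --- Prim's algorithm from root 0, O(V^2) ---
        parent = [0] * n
        best = [float("inf")] * n       # cheapest edge connecting v to the tree
        best[0] = 0
        in_tree = [False] * n
        children = [[] for _ in range(n)]
        for _ in range(n):
            u = min((v for v in range(n) if not in_tree[v]),
                    key=lambda v: best[v])
            in_tree[u] = True
            if u != 0:
                children[parent[u]].append(u)
            for v in range(n):
                if not in_tree[v] and cost[u][v] < best[v]:
                    best[v] = cost[u][v]
                    parent[v] = u
        # --- preorder walk of the MST ---
        tour, stack = [], [0]
        while stack:
            u = stack.pop()
            tour.append(u)
            stack.extend(reversed(children[u]))   # keep left-to-right order
        return tour   # visit vertices in this order, then return to tour[0]

    # Four points on a line (metric costs): the optimal tour costs 6,
    # so the returned tour costs at most 12.
    c = [[abs(i - j) for j in range(4)] for i in range(4)]
    t = approx_tsp_tour(c)
    print(t, sum(c[t[i]][t[(i + 1) % 4]] for i in range(4)))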
The general TSP

Now c no longer satisfies the triangle inequality.

Theorem. If P ≠ NP, then for any constant ρ ≥ 1, there is no poly-time ρ-approximation algorithm for the general TSP.

Proof. By contradiction. Suppose there is a poly-time ρ-approximation algorithm A, with ρ ≥ 1 an integer. We use A to solve HAMILTON-CYCLE in poly time (which implies P = NP).

Let G = (V, E) be an instance of HAMILTON-CYCLE. Let G' = (V, E') be the complete graph on V:

    E' = {(u, v) : u, v ∈ V and u ≠ v}

We assign costs to the edges in E':

    c(u, v) = 1             if (u, v) ∈ E
    c(u, v) = ρ · |V| + 1   otherwise

Creating G' and c from G is certainly possible in poly time.

Consider the TSP instance ⟨G', c⟩. If the original graph G has a Hamiltonian cycle H, then c assigns a cost of one to each edge of H, and G' contains a tour of cost |V|. Otherwise, any tour of G' must contain some edge not in E, and thus have cost at least

    (ρ · |V| + 1) + (|V| − 1) = ρ · |V| + |V| > ρ · |V|

(the edge not in E costs ρ · |V| + 1, and the remaining |V| − 1 edges cost at least 1 each).

Apply A to ⟨G', c⟩. By assumption, A returns a tour of cost at most ρ times the cost of an optimal tour. Thus, if G contains a Hamiltonian cycle, A must return a tour of cost at most ρ · |V|; if G is not Hamiltonian, A returns a tour of cost greater than ρ · |V|. So we can use A to decide HAMILTON-CYCLE. (q.e.d.)

The proof was an example of a general technique for proving that a problem cannot be approximated well. Suppose we are given a minimisation problem Y; pick a related NP-hard problem X such that

• "yes" instances of X correspond to instances of Y with value at most some k,
• "no" instances of X correspond to instances of Y with value greater than ρk.

Then there is no ρ-approximation algorithm for Y unless P = NP (because otherwise we could decide X by running the approximation algorithm for Y).

Randomised approximation

A randomised algorithm has an approximation ratio of ρ(n) if, for any input of size n, the expected cost C of its solution is within a factor ρ(n) of the cost C* of an optimal solution:

    max(C/C*, C*/C) ≤ ρ(n)

So, just like with a "standard" approximation algorithm, except that the approximation ratio holds for the expected cost.

Consider 3-CNF-SAT, the problem of deciding whether or not a given formula in 3-CNF is satisfiable. 3-CNF-SAT is NP-complete.

Q: What could be a related optimisation problem?

A: MAX-3-CNF. Even if a formula is perhaps not satisfiable, we might be interested in satisfying as many clauses as possible. Assumption: each clause consists of exactly three distinct literals and does not contain both a variable and its negation (so we can have neither x ∨ x ∨ y nor x ∨ ¬x ∨ y).

Randomised algorithm: independently set each variable to 1 with probability 1/2, and to 0 with probability 1/2.

Theorem. Given an instance of MAX-3-CNF with n variables x1, x2, ..., xn and m clauses, the described randomised algorithm is a randomised 8/7-approximation algorithm.

Proof. Define indicator variables Y1, Y2, ..., Ym with

    Yi = 1 if clause i is satisfied by the algorithm's assignment, and Yi = 0 otherwise.

This means Yi = 1 iff at least one of the three literals in clause i has been set to 1. By assumption, the settings of the three literals in a clause are independent. A clause is not satisfied iff all three literals are set to 0, thus

    P(Yi = 0) = (1/2)^3 = 1/8    and    P(Yi = 1) = 1 − 1/8 = 7/8

and therefore

    E[Yi] = 0 · P(Yi = 0) + 1 · P(Yi = 1) = P(Yi = 1) = 7/8

Let Y be the number of satisfied clauses, i.e. Y = Y1 + ... + Ym. By linearity of expectation,

    E[Y] = E[Y1 + ... + Ym] = E[Y1] + ... + E[Ym] = (7/8) · m

m is an upper bound on the number of satisfied clauses, thus the approximation ratio is at most

    OPT / E[our solution] ≤ m / ((7/8) · m) = 8/7

(q.e.d.)
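A minimal sketch of the randomised assignment; the clause representation is ours: a clause is a tuple of signed variable indices, e.g. (1, -2, 3) meaning x1 ∨ ¬x2 ∨ x3. Averaged over many runs, the fraction of satisfied clauses approaches 7/8, matching the expectation computed in the proof.

    import random

    def random_assignment(n):
        """Set each of x1..xn independently to True/False with prob 1/2."""
        return {i: random.random() < 0.5 for i in range(1, n + 1)}

    def satisfied(clauses, assign):
        """Count clauses with at least one true literal."""
        return sum(
            any(assign[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )

    clauses = [(1, -2, 3), (-1, 2, -4), (2, 3, 4)]
    runs = 10000
    avg = sum(satisfied(clauses, random_assignment(4))
              for _ in range(runs)) / runs
    print(avg)   # close to (7/8) * 3 = 2.625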
An approximation scheme

An instance of the SUBSET-SUM problem is a pair ⟨S, t⟩ with S = {x1, x2, ..., xn} a set of positive integers and t a positive integer. The decision problem asks whether there is a subset of S that adds up to t; SUBSET-SUM is NP-complete. In the optimisation problem, we wish to find a subset of S whose sum is as large as possible but not larger than t.

An exponential-time algorithm

Just enumerate all subsets of S and pick the one with the largest sum that does not exceed t. There are 2^n possible subsets (an item is "in" or "out"), so this takes time O(2^n).

An implementation could look as follows. In iteration i, the algorithm computes the sums of all subsets of {x1, x2, ..., xi}, using as its starting point the sums of all subsets of {x1, x2, ..., xi−1}. Once a particular subset S' has a sum exceeding t, there is no reason to maintain it: no superset of S' can possibly be a solution. So: iteratively compute Li, the list of sums of all subsets of {x1, x2, ..., xi} that do not exceed t, and return the maximum value in Ln.

Notation: if L is a list of positive integers and x is another positive integer, then L + x denotes the list derived from L with each element of L increased by x. Ex: L = ⟨4, 3, 2, 4, 6, 7⟩, L + 3 = ⟨7, 6, 5, 7, 9, 10⟩. We also use this notation for sets: S + x = {s + x : s ∈ S}.

Let MERGE-LIST(L, L') return the sorted list that is the merge of sorted lists L and L' with duplicates removed; its running time is O(|L| + |L'|).

EXACT-SUBSET-SUM(S = {x1, x2, ..., xn}, t)
1: L0 ← ⟨0⟩
2: for i ← 1 to n do
3:    Li ← MERGE-LIST(Li−1, Li−1 + xi)
4:    remove from Li every element that is greater than t
5: end for
6: return the largest element in Ln

How does it work? Let Pi denote the set of all values that can be obtained by selecting a (possibly empty) subset of {x1, x2, ..., xi} and summing its members. Ex: S = {1, 4, 5}, then

    P1 = {0, 1}
    P2 = {0, 1, 4, 5}
    P3 = {0, 1, 4, 5, 6, 9, 10}

Clearly,

    Pi = Pi−1 ∪ (Pi−1 + xi)

One can prove by induction on i that Li is a sorted list containing every element of Pi with value at most t. The length of Li can be 2^i, so EXACT-SUBSET-SUM is an exponential-time algorithm in general. In special cases, however, it is poly-time: if t is polynomial in |S|, or if all xi are polynomial in |S|.

A fully-polynomial approximation scheme

Recall: the running time must be polynomial in both 1/ε and n. Basic idea: modify the exact exponential-time algorithm by trimming each list Li after its creation. If two values are "close", then we don't maintain both of them (they will give similar approximations). Precisely: given a "trimming parameter" δ with 0 < δ < 1, from a given list L we remove as many elements as possible, such that if L' is the result, then for every element y that is removed, there is an element z still in L' that "approximates" y:

    y/(1 + δ) ≤ z ≤ y

Note the "one-sided error". We say z represents y in L'; each removed y is represented by some z satisfying the condition above.

Example: δ = 0.1, L = ⟨10, 11, 12, 15, 20, 21, 22, 23, 24, 29⟩. We can trim L to L' = ⟨10, 12, 15, 20, 23, 29⟩: 11 is represented by 10; 21 and 22 are represented by 20; 24 is represented by 23.

Given a list L = ⟨y1, y2, ..., ym⟩ with y1 ≤ y2 ≤ ... ≤ ym, the following function trims L in time Θ(m).

TRIM(L, δ)
1: L' ← ⟨y1⟩
2: last ← y1
3: for i ← 2 to m do
4:    if yi > last · (1 + δ) then   /* yi ≥ last because L is sorted */
5:       append yi onto the end of L'
6:       last ← yi
7:    end if
8: end for

The elements of L are scanned in non-decreasing order; a number is put into L' only if it is the first element of L, or if it can't be represented by the most recent number ("last") placed into L'.

Now we can construct our approximation scheme. The input is S = {x1, x2, ..., xn} with each xi an integer, a target integer t, and an "approximation parameter" ε with 0 < ε < 1. It returns a value z* that is within a factor 1 + ε of the optimal solution.

APPROX-SUBSET-SUM(S = {x1, x2, ..., xn}, t, ε)
1: L0 ← ⟨0⟩
2: for i ← 1 to n do
3:    Li ← MERGE-LIST(Li−1, Li−1 + xi)
4:    Li ← TRIM(Li, ε/2n)
5:    remove from Li every element that is greater than t
6: end for
7: return z*, the largest element in Ln

Example: S = {104, 102, 201, 101}, t = 308, ε = 0.4, so δ = ε/2n = 0.4/8 = 0.05.

    line 1: L0 = ⟨0⟩
    line 3: L1 = ⟨0, 104⟩
    line 4: L1 = ⟨0, 104⟩
    line 5: L1 = ⟨0, 104⟩
    line 3: L2 = ⟨0, 102, 104, 206⟩
    line 4: L2 = ⟨0, 102, 206⟩
    line 5: L2 = ⟨0, 102, 206⟩
    line 3: L3 = ⟨0, 102, 201, 206, 303, 407⟩
    line 4: L3 = ⟨0, 102, 201, 303, 407⟩
    line 5: L3 = ⟨0, 102, 201, 303⟩
    line 3: L4 = ⟨0, 101, 102, 201, 203, 302, 303, 404⟩
    line 4: L4 = ⟨0, 101, 201, 302, 404⟩
    line 5: L4 = ⟨0, 101, 201, 302⟩

The algorithm returns z* = 302, well within ε = 40% of the optimal answer 307 = 104 + 102 + 101 (in fact, within 2%).
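A minimal runnable sketch of TRIM and APPROX-SUBSET-SUM (ours; the merge step uses sorted(set(...)) instead of an explicit MERGE-LIST, which keeps the sketch short at the price of an extra logarithmic factor):

    def trim(L, delta):
        """Keep only elements not represented, within a (1+delta) factor,
        by the previously kept element; L must be sorted."""
        trimmed = [L[0]]
        last = L[0]
        for y in L[1:]:
            if y > last * (1 + delta):
                trimmed.append(y)
                last = y
        return trimmed

    def approx_subset_sum(S, t, eps):
        """(1+eps)-approximation for the subset-sum optimisation problem."""
        n = len(S)
        L = [0]
        for x in S:
            L = sorted(set(L) | {y + x for y in L})   # merge, drop duplicates
            L = trim(L, eps / (2 * n))
            L = [y for y in L if y <= t]              # drop values beyond t
        return L[-1]

    print(approx_subset_sum([104, 102, 201, 101], 308, 0.4))  # 302 (opt. 307)

Running this reproduces the trace above: the final list is [0, 101, 201, 302] and the returned value is 302.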
Theorem. APPROX-SUBSET-SUM is a fully polynomial approximation scheme for the subset-sum problem.

Proof. Trimming Li and removing from Li every element that is greater than t maintain the property that every element of Li is a member of Pi. Thus, z* is the sum of some subset of S. Let y* ∈ Pn denote an optimal solution. Clearly, z* ≤ y* (we have only removed elements that are too large). We need to show that y*/z* ≤ 1 + ε and that the running time is polynomial in n and 1/ε.

One can show (by induction on i) that for every y ∈ Pi with y ≤ t, there is some z ∈ Li with

    y/(1 + ε/2n)^i ≤ z ≤ y

This also holds for y* ∈ Pn, so there is some z ∈ Ln with

    y*/(1 + ε/2n)^n ≤ z ≤ y*    and thus    y*/z ≤ (1 + ε/2n)^n

Since z* is the largest value in Ln, we have

    y*/z* ≤ (1 + ε/2n)^n

It remains to show that y*/z* ≤ 1 + ε. We know (1 + a/n)^n ≤ e^a, and therefore

    (1 + ε/2n)^n = ((1 + ε/2n)^2n)^(1/2) ≤ (e^ε)^(1/2) = e^(ε/2)

This, together with

    e^(ε/2) ≤ 1 + ε/2 + (ε/2)^2 ≤ 1 + ε

gives

    y*/z* ≤ (1 + ε/2n)^n ≤ 1 + ε

The approximation ratio is fine, but what about the running time? We derive a bound on |Li|. After trimming, successive elements z and z' of Li fulfil z'/z > 1 + ε/2n. Thus, each list contains the value 0, possibly the value 1, and at most ⌊log_{1+ε/2n} t⌋ additional values. We have

    |Li| ≤ log_{1+ε/2n} t + 2
         = (ln t) / ln(1 + ε/2n) + 2
         ≤ 2n(1 + ε/2n)(ln t)/ε + 2    /* because x/(1 + x) ≤ ln(1 + x) ≤ x */
         ≤ (4n ln t)/ε + 2             /* because 0 < ε < 1 */

This is polynomial in the size of the input (log t bits for t, plus the bits for x1, x2, ..., xn), and thus polynomial in n and 1/ε. The running time of APPROX-SUBSET-SUM is polynomial in the lengths of the Li, so it is polynomial in n and 1/ε as well. (q.e.d.)
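The key inequality chain (1 + ε/2n)^n ≤ e^(ε/2) ≤ 1 + ε can be checked numerically; this quick sanity-check script is ours, with sample points chosen arbitrarily in the allowed range 0 < ε < 1.

    import math

    for eps in (0.1, 0.5, 0.9):
        for n in (1, 10, 1000):
            lhs = (1 + eps / (2 * n)) ** n
            assert lhs <= math.exp(eps / 2) <= 1 + eps, (eps, n)
    print("inequality chain holds at all sampled points")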