Advanced Algorithms 2.9



... load of the machines (see Figure 8). Therefore,

\[ C_{\max} \le \frac{\sum_{j \ne k} p_j}{m} + p_k \le \frac{\sum_j p_j}{m} + \max_j p_j \le 2\max\left(\max_j p_j,\ \frac{\sum_j p_j}{m}\right) = 2L. \]

Now we have an interval on which to do a logarithmic binary search for $C_{\max}$. By $T_1$ and $T_2$ we denote the lower and upper bound pointers we are going to use in our binary search. Clearly, $T = \sqrt{T_1 T_2}$ is the midpoint in the logarithmic sense. Based on Lemma 11, we must search for the solution in the interval $[L, 2L]$. Since we use a logarithmic scale, we set $\log_2 T_1 = \log_2 L$, $\log_2 T_2 = \log_2 L + 1$ and $\log_2 T = \frac{1}{2}(\log_2 T_1 + \log_2 T_2)$.

When do we stop? The idea is to use a different value of $\epsilon$. That is, the approximation algorithm proceeds as follows. Every time, the new interval is chosen depending on whether the procedure for the $(1+\epsilon/2)$-relaxed decision version returns a "no" or, in case of a "yes", a schedule with makespan at most $(1+\epsilon/2)T$, where $T = \sqrt{T_1 T_2}$ and $[T_1, T_2]$ is the current interval. The binary search continues until the bounds $T_1, T_2$ satisfy the relation $\frac{T_2}{T_1}(1+\epsilon/2) \le 1+\epsilon$, or equivalently $\frac{T_2}{T_1} \le \frac{1+\epsilon}{1+\epsilon/2}$. The number of iterations required to satisfy this relation is $O(\lg(1/\epsilon))$. Notice that this value is a constant for a fixed $\epsilon$. At termination, the makespan of the schedule corresponding to $T_2$ will be within a factor of $1+\epsilon$ of the optimal makespan.

In order to complete the analysis of the algorithm, it remains to describe the procedure for the $(1+\epsilon/2)$-relaxed decision version for any $\epsilon$. Intuitively, if we look at what can go wrong in list scheduling, we see that it is "governed" by "long" jobs, since small jobs can be easily accommodated. This is the approach we take when designing the procedure that solves the $(1+\epsilon/2)$-relaxed decision version of the problem. For the rest of our discussion we will denote $\epsilon/2$ by $\epsilon'$. Given $\{p_j\}$, $\epsilon'$ and $T$, the procedure operates as follows:

Step 1: Remove all small jobs with $p_j \le \epsilon' T$.

Step 2: Somehow (to be specified later) solve the $(1+\epsilon')$-relaxed decision version of the problem for the remaining big jobs.
Step 3: If the answer in Step 2 is "no", then return that there does not exist a schedule with makespan $\le T$. If the answer in Step 2 is "yes", then with a deadline of $(1+\epsilon')T$ put back all small jobs using list scheduling (i.e. the greedy strategy), one at a time. If all jobs are accommodated, then return that schedule; else return that there does not exist a schedule with makespan $\le T$.

[Figure 9: Scheduling "small" jobs.]

Step 3 of the algorithm gives the final answer of the procedure. In the case of a "yes" it is clear that the answer is correct. In the case of a "no" that was propagated from Step 2 it is also clear that the answer is correct. Finally, if we fail to put back all the small jobs, we must also show that the algorithm is correct. Let us look at a list schedule in which some small jobs have been scheduled but others couldn't (see Figure 9). If we cannot accommodate all small jobs with a deadline of $(1+\epsilon')T$, it means that all machines are busy at time $T$, since the processing time of each small job is at most $\epsilon' T$. Hence, the average load per machine exceeds $T$. Therefore, the answer "no" is correct.

Now, we describe Step 2 of the algorithm for the jobs with $p_j > \epsilon' T$. Having eliminated the small jobs, we obtain an upper bound (a constant when $\epsilon$ is fixed) on the number of jobs processed on one machine. Also, we would like to have only a small number of distinct processing times in order to be able to enumerate in polynomial time all possible schedules. For this purpose, the idea is to use rounding. Let $q_j$ be the largest number of the form $\epsilon' T + k\epsilon'^2 T \le p_j$ for some $k \in \mathbb{N}$. A refinement of Step 2 is the following.

Step 2.1: Address the decision problem: is there a schedule for $\{q_j\}$ with makespan $\le T$?

Step 2.2: If the answer is "no", then return that there does not exist a schedule with makespan $\le T$. If the answer is "yes", then return that schedule.

The lemma that follows justifies the correctness of the refined Step 2.

Lemma 12 Step 2 of the algorithm is correct.
Proof: If Step 2.1 returns "no", then it is clear that the final answer of Step 2 should be "no", since $q_j \le p_j$. If Step 2.1 returns "yes", then the total increase of the makespan due to the replacement of $q_j$ by $p_j$ is no greater than $(1/\epsilon')\,\epsilon'^2 T = \epsilon' T$. This is true because we have at most $T/(\epsilon' T) = 1/\epsilon'$ jobs per machine, and because $p_j \le q_j + \epsilon'^2 T$ by definition. Thus, the total length of the schedule with respect to $\{p_j\}$ is at most $T + \epsilon' T = (1+\epsilon')T$.

It remains to show how to solve the decision problem of Step 2.1. We can achieve this in polynomial time using dynamic programming. Note that the input to this decision problem is "nice": we have at most $P = \lfloor 1/\epsilon' \rfloor$ jobs per machine, and at most $Q = \lceil 1/\epsilon'^2 \rceil + 1$ distinct processing times. Since $\epsilon'$ is considered to be fixed, we essentially have a constant number of jobs per machine and a constant number $q_1, \ldots, q_Q$ of processing times. Let $\vec{n} = (n_1, \ldots, n_Q)$, where $n_i$ denotes the number of jobs whose processing time is $q_i$. We use the fact that the decision problems of $P\,||\,C_{\max}$ and bin packing are equivalent. Let $f(\vec{n})$ denote the minimum number of machines needed to process $\vec{n}$ by time $T$. Finally, let

\[ R = \Big\{\vec{r} = (r_1, \ldots, r_Q) : \sum_{i=1}^Q r_i q_i \le T,\ r_i \le n_i,\ r_i \in \mathbb{N}\Big\}. \]

$R$ represents the sets of jobs that can be processed on a single machine with a deadline of $T$. The recurrence for the dynamic programming formulation of the problem is

\[ f(\vec{n}) = 1 + \min_{\vec{r} \in R} f(\vec{n} - \vec{r}), \]

namely we need one machine to accommodate the jobs in some $\vec{r} \in R$ and $f(\vec{n} - \vec{r})$ machines to accommodate the remaining jobs. In order to compute this recurrence we first have to compute the at most $Q^P$ vectors in $R$. The upper bound on the size of $R$ comes from the fact that we have at most $P$ jobs per machine and each job can have one of at most $Q$ processing times. Subsequently, for each one of the vectors in $R$ we have to iterate over at most $n^Q$ vectors $\vec{n}$, since $n_i \le n$ and there are $Q$ components in $\vec{n}$. Thus, the running time of Step 2.1 is $O\big(n^{1/\epsilon'^2} (1/\epsilon'^2)^{1/\epsilon'}\big)$.
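This dynamic program is small enough to implement directly. The following is our own illustrative rendering (function and variable names are ours, not from the notes): it enumerates $R$ by brute force and memoizes $f$, exactly as in the recurrence above; it assumes every $q_i \le T$, since otherwise no schedule exists.

```python
from functools import lru_cache
from itertools import product

def min_machines(counts, q, T):
    """f(n): minimum number of machines that finish all jobs by time T,
    where counts[i] jobs have the (rounded) processing time q[i].
    Assumes every q[i] <= T."""
    # Enumerate R: vectors r with sum_i r_i * q_i <= T and r_i <= counts[i].
    caps = [min(c, int(T // qi)) for c, qi in zip(counts, q)]
    R = [r for r in product(*(range(c + 1) for c in caps))
         if any(r) and sum(ri * qi for ri, qi in zip(r, q)) <= T]

    @lru_cache(maxsize=None)
    def f(n):
        if not any(n):
            return 0  # no jobs left: zero machines needed
        # one machine for the jobs in r, f(n - r) machines for the rest
        return 1 + min(f(tuple(ni - ri for ni, ri in zip(n, r)))
                       for r in R
                       if all(ri <= ni for ri, ni in zip(r, n)))

    return f(tuple(counts))
```

The enumeration of `R` and the memoized recursion mirror the $Q^P$ and $n^Q$ terms of the running-time bound.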
From this point we can derive the overall running time of the PTAS in a straightforward manner. Since the binary search performs $O(\lg(1/\epsilon))$ iterations, each invoking the procedure of Step 2, and since $\epsilon = 2\epsilon'$, the overall running time of the algorithm is $O\big(n^{1/\epsilon'^2} (1/\epsilon'^2)^{1/\epsilon'} \lg(1/\epsilon)\big)$.

7 Randomized Rounding for Multicommodity Flows

In this section, we look at using randomness to approximate a certain kind of multicommodity flow problem. The problem is as follows: given a directed graph $G = (V, E)$, with sources $s_i \in V$ and sinks $t_i \in V$ for $i = 1, \ldots, k$, we want to find a path $P_i$ from $s_i$ to $t_i$ for $1 \le i \le k$ such that the "width" or "congestion" of any edge is as small as possible. The "width" of an edge is defined to be the number of paths using that edge. This multicommodity flow problem is NP-complete in general. The randomized approximation algorithm that we discuss in these notes is due to Raghavan and Thompson [24].

7.1 Reformulating the problem

The multicommodity flow problem can be formulated as the following integer program: minimize $W$ subject to

\[ \sum_w x_i(v, w) - \sum_w x_i(w, v) = \begin{cases} 1 & \text{if } v = s_i \\ -1 & \text{if } v = t_i \\ 0 & \text{otherwise} \end{cases} \qquad i = 1, \ldots, k;\ v \in V, \tag{6} \]

\[ x_i(v, w) \in \{0, 1\} \qquad i = 1, \ldots, k;\ (v, w) \in E, \]

\[ \sum_i x_i(v, w) \le W \qquad (v, w) \in E. \tag{7} \]

Notice that constraint (6) forces the $x_i$ to define a path (perhaps not simple) from $s_i$ to $t_i$. Constraint (7) ensures that every edge has width no greater than $W$, and the overall integer program minimizes $W$.

We can consider the LP relaxation of this integer program by replacing the constraints $x_i(v, w) \in \{0, 1\}$ with $x_i(v, w) \ge 0$. The resulting linear program can be solved in polynomial time by using interior point methods discussed earlier in the course. The resulting solution may not be integral. For example, consider the multicommodity flow problem with one source and sink, and suppose that there are exactly $i$ edge-disjoint paths between the source and sink. If we weight the edges of each path by $1/i$ (i.e. set $x(v, w) = 1/i$ for each edge of each path), then $W_{LP} = 1/i$.
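On tiny instances, the integer program can be solved by brute force, which makes the definition of $W$ concrete. This is our own sanity-check code, not part of the notes: given a list of candidate paths per commodity, it enumerates one choice per commodity and returns the minimum achievable maximum edge width.

```python
from itertools import product

def min_congestion(candidate_paths):
    """candidate_paths[i] is a list of candidate paths for commodity i,
    each path given as a list of directed edges (v, w). Returns the
    minimum over joint choices of the maximum edge width W."""
    best = None
    for choice in product(*candidate_paths):
        width = {}
        for path in choice:
            for e in path:
                width[e] = width.get(e, 0) + 1  # count paths using edge e
        W = max(width.values())
        best = W if best is None else min(best, W)
    return best
```

For example, if two commodities both have a direct edge available but one of them also has a detour, the optimum routes one commodity around the shared edge and achieves $W = 1$.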
The value $W_{LP}$ can be no smaller: since there are $i$ edge-disjoint paths, there is a cut in the graph with $i$ edges. The average flow on these edges will be $1/i$, so that the width will be at least $1/i$.

The fractional solution can be decomposed into paths using flow decomposition, a standard technique from network flow theory. Let $x$ be such that $x \ge 0$ and

\[ \sum_w x(v, w) - \sum_w x(w, v) = \begin{cases} a & \text{if } v = s_i \\ -a & \text{if } v = t_i \\ 0 & \text{otherwise.} \end{cases} \]

Then we can find paths $P_1, \ldots, P_l$ from $s_i$ to $t_i$ and $\lambda_1, \ldots, \lambda_l \in \mathbb{R}^+$ such that

\[ \sum_j \lambda_j = a \qquad \text{and} \qquad \sum_{j : (v,w) \in P_j} \lambda_j \le x(v, w). \]

To see why we can do this, suppose we only have one source and one sink, $s$ and $t$. Look at the "residual graph" of $x$: that is, all edges $(v, w)$ such that $x(v, w) > 0$. Find some path $P_1$ from $s$ to $t$ in this graph. Let $\lambda_1 = \min_{(v,w) \in P_1} x(v, w)$. Set

\[ x'(v, w) = \begin{cases} x(v, w) - \lambda_1 & (v, w) \in P_1 \\ x(v, w) & \text{otherwise.} \end{cases} \]

We can now solve the problem recursively with $a' = a - \lambda_1$.

7.2 The algorithm

We now present Raghavan and Thompson's randomized algorithm for this problem.

1. Solve the LP relaxation, yielding $W_{LP}$.

2. Decompose the fractional solution into paths, yielding paths $P_{ij}$ for $i = 1, \ldots, k$ and $j = 1, \ldots, j_i$, where $P_{ij}$ is a path from $s_i$ to $t_i$, and yielding $\lambda_{ij} \ge 0$ such that $\sum_j \lambda_{ij} = 1$ and

\[ \sum_i \sum_{j : (v,w) \in P_{ij}} \lambda_{ij} \le W_{LP}. \]

3. (Randomization step) For all $i$, cast a $j_i$-faced die with face probabilities $\lambda_{ij}$. If the outcome is face $f$, select path $P_{if}$ as the path $P_i$ from $s_i$ to $t_i$.

We will show, using a Chernoff bound, that with high probability we will get small congestion. Later we will show how to derandomize this algorithm. To carry out the derandomization it will be important to have a strong handle on the Chernoff bound and its derivation.

7.3 Chernoff bound

For completeness, we include the derivation of a Chernoff bound, although it already appears in the randomized algorithms chapter.
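The flow decomposition used in step 2 follows the recursive peeling argument above. Here is a minimal single-commodity sketch of our own (names assumed): edge flows are a dict, a path is found by DFS on the support of $x$, and its bottleneck value $\lambda$ is peeled off.

```python
def decompose(x, s, t):
    """Decompose a flow x (dict: edge (v, w) -> value) from s to t into
    weighted paths [(lam, path)], peeling one path at a time."""
    x = dict(x)
    paths = []
    while True:
        # find a path s -> t in the support of x by DFS
        stack, prev = [s], {s: None}
        while stack:
            v = stack.pop()
            if v == t:
                break
            for (a, b), val in x.items():
                if a == v and val > 1e-12 and b not in prev:
                    prev[b] = (a, b)
                    stack.append(b)
        if t not in prev:
            return paths  # support exhausted
        # recover the path and its bottleneck value lambda
        path, v = [], t
        while prev[v] is not None:
            path.append(prev[v])
            v = prev[v][0]
        path.reverse()
        lam = min(x[e] for e in path)
        for e in path:
            x[e] -= lam  # peel the path off; at least one edge drops to 0
        paths.append((lam, path))
```

Each iteration zeroes at least one edge of the support, so the loop terminates with at most $|E|$ paths.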
Lemma 13 Let $X_i$ be independent Bernoulli random variables with probability of success $p_i$. Then, for all $t > 0$ and all $\alpha \ge 0$, we have

\[ \Pr\Big[\sum_{i=1}^k X_i \ge t\Big] \le e^{-\alpha t}\, E\big[e^{\alpha \sum_i X_i}\big] = e^{-\alpha t} \prod_{i=1}^k \big(p_i e^{\alpha} + 1 - p_i\big). \]

Proof: $\Pr[\sum_{i=1}^k X_i \ge t] = \Pr[e^{\alpha \sum_{i=1}^k X_i} \ge e^{\alpha t}]$ for any $\alpha \ge 0$. Moreover, this can be written as $\Pr[Y \ge a]$ with $Y \ge 0$. From Markov's inequality we have $\Pr[Y \ge a] \le E[Y]/a$ for any nonnegative random variable $Y$. Thus,

\[ \Pr\Big[\sum_{i=1}^k X_i \ge t\Big] \le e^{-\alpha t} E\big[e^{\alpha \sum_i X_i}\big] = e^{-\alpha t} \prod_{i=1}^k E\big[e^{\alpha X_i}\big], \]

because of independence. The equality in the lemma then follows from the definition of expectation.

Setting $t = (1+\delta) E[\sum_i X_i]$ for some $\delta > 0$ and $\alpha = \ln(1+\delta)$, we obtain:

Corollary 14 Let $X_i$ be independent Bernoulli random variables with probability of success $p_i$, and let $M = E[\sum_{i=1}^k X_i] = \sum_{i=1}^k p_i$. Then, for all $\delta > 0$, we have

\[ \Pr\Big[\sum_{i=1}^k X_i \ge (1+\delta)M\Big] \le (1+\delta)^{-(1+\delta)M} \prod_{i=1}^k E\big[(1+\delta)^{X_i}\big] \le \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^M. \]

The second inequality of the corollary follows from the fact that

\[ E\big[(1+\delta)^{X_i}\big] = p_i(1+\delta) + (1 - p_i) = 1 + \delta p_i \le e^{\delta p_i}. \]

7.4 Analysis of the R-T algorithm

Raghavan and Thompson show the following theorem.

Theorem 15 Given $\epsilon > 0$, if the optimal solution to the multicommodity flow problem has value $W^* = \Omega(\log n)$, where $n = |V|$, then the algorithm produces a solution of width $W \le W^* + c\sqrt{W^* \ln n}$ with probability at least $1 - \epsilon$ (where $c$ and the constant in $\Omega(\log n)$ depend on $\epsilon$; see the proof).

Proof: Fix an edge $(v, w) \in E$. Edge $(v, w)$ is used by commodity $i$ with probability $p_i = \sum_{j : (v,w) \in P_{ij}} \lambda_{ij}$. Let $X_i$ be a Bernoulli random variable denoting whether or not $(v, w)$ is in path $P_i$. Then $W(v, w) = \sum_{i=1}^k X_i$, where $W(v, w)$ is the width of edge $(v, w)$. Hence,

\[ E[W(v, w)] = \sum_i p_i = \sum_i \sum_{j : (v,w) \in P_{ij}} \lambda_{ij} \le W_{LP} \le W^*. \]

Now using the Chernoff bound derived earlier,

\[ \Pr[W(v, w) \ge (1+\delta)W^*] \le \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{W^*} = e^{(\delta - (1+\delta)\ln(1+\delta))W^*}. \]

Assume that $\delta \le 1$. Then one can show that

\[ \frac{e^{\delta}}{(1+\delta)^{1+\delta}} = e^{\delta - (1+\delta)\ln(1+\delta)} \le e^{-\delta^2/3}. \]

Therefore, for

\[ \delta = \sqrt{\frac{3 \ln(n^2/\epsilon)}{W^*}}, \]

we have that

\[ \Pr[W(v, w) \ge (1+\delta)W^*] \le \frac{\epsilon}{n^2}. \]

Notice that our assumption that $\delta \le 1$ is met if $W^* \ge 6 \ln n - 3 \ln \epsilon$. For this choice of $\delta$, we derive that

\[ (1+\delta)W^* = W^* + \sqrt{3 W^* \ln \frac{n^2}{\epsilon}}. \]

We consider now the maximum congestion.
We have

\[ \Pr\Big[\max_{(v,w) \in E} W(v, w) \ge (1+\delta)W^*\Big] \le |E| \max_{(v,w) \in E} \Pr[W(v, w) \ge (1+\delta)W^*] \le |E| \frac{\epsilon}{n^2} \le \epsilon, \]

proving the result.

7.5 Derandomization

We will use the method of conditional probabilities. We will need to supplement this technique, however, with an additional trick to carry through the derandomization. This result is due to Raghavan [23].

We can represent the probability space using a decision tree. At the root of the tree we haven't made any decisions. As we descend the tree from the root, we represent the choices first for commodity 1, then for commodity 2, etc. Hence the root has $j_1$ children, representing the $j_1$ possible paths for commodity 1. Each of these nodes has $j_2$ children, one for each of the $j_2$ possible paths for commodity 2. We continue in this manner until we have reached level $k$. Clearly the leaves of this tree represent all the possible choices of paths for the $k$ commodities. A node at level $i$ (the root is at level 0) is labeled by the $i$ choices of paths $l_1, \ldots, l_i$ for commodities $1, \ldots, i$. Now we define:

\[ g(l_1, \ldots, l_i) = \Pr\left[\max_{(v,w) \in E} W(v, w) \ge (1+\delta)W^* \;\middle|\; \text{path } l_j \text{ chosen for commodity } j,\ j = 1, \ldots, i\right]. \]

By conditioning on the choice of the path for commodity $i$, we obtain that

\[ g(l_1, \ldots, l_{i-1}) = \sum_{j=1}^{j_i} \lambda_{ij}\, g(l_1, \ldots, l_{i-1}, j) \tag{8} \]

\[ \ge \min_j g(l_1, \ldots, l_{i-1}, j). \tag{9} \]

If we could compute $g(l_1, l_2, \ldots)$ efficiently, we could start from $g()$ and, by selecting the minimum at each stage, construct a sequence $g() \ge g(l_1) \ge g(l_1, l_2) \ge \ldots \ge g(l_1, l_2, \ldots, l_k)$. Unfortunately, we don't know how to calculate these quantities. Therefore we need to use an additional trick. Instead of using the exact value $g$, we shall use a pessimistic estimator for the probability of failure. From the derivation of the Chernoff bound and the analysis of the algorithm, we know that

\[ \Pr\Big[\max_{(v,w) \in E} W(v, w) \ge (1+\delta)W^*\Big] \le \sum_{(v,w) \in E} (1+\delta)^{-(1+\delta)W^*} \prod_{i=1}^k E\big[(1+\delta)^{X_i^{(v,w)}}\big], \tag{10} \]

where the superscript on $X_i$ denotes the dependence on the edge $(v, w)$, i.e. $X_i^{(v,w)} = 1$ if $(v, w)$ belongs to the path $P_i$.
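Since the right-hand side of (10) factorizes over commodities, it can be evaluated exactly once some path choices are fixed. The following is our own compact sketch of the resulting greedy derandomization (all names are hypothetical; paths are represented as sets of edges). On toy instances the estimator may start above $\epsilon$, but minimizing it commodity by commodity still spreads the paths out.

```python
def pessimistic_h(paths, lambdas, delta, W, fixed):
    """h(l_1..l_i): RHS of (10) conditioned on the choices in `fixed`
    (fixed[j] = chosen path index for commodity j). paths[j] is a list of
    candidate paths (sets of edges), lambdas[j] the matching probabilities."""
    edges = {e for cand in paths for p in cand for e in p}
    total = 0.0
    for e in edges:
        term = (1 + delta) ** (-(1 + delta) * W)
        for j, cand in enumerate(paths):
            if j in fixed:
                # conditioned factor: (1+delta)^{X_j} is deterministic
                if e in cand[fixed[j]]:
                    term *= (1 + delta)
            else:
                # E[(1+delta)^{X_j}] = 1 + delta * Pr[edge e used by j]
                pe = sum(l for p, l in zip(cand, lambdas[j]) if e in p)
                term *= 1 + delta * pe
        total += term
    return total

def derandomize(paths, lambdas, delta, W):
    """Method of conditional probabilities: fix commodities one by one,
    choosing the path index that minimizes the estimator h."""
    fixed = {}
    for j in range(len(paths)):
        fixed[j] = min(range(len(paths[j])),
                       key=lambda idx: pessimistic_h(
                           paths, lambdas, delta, W, {**fixed, j: idx}))
    return [paths[j][fixed[j]] for j in range(len(paths))]
```

With two commodities that share one edge but each have a private detour, the greedy choice routes them disjointly.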
Letting $h(l_1, \ldots, l_i)$ be the RHS of (10) when we condition on selecting path $P_{j l_j}$ for commodity $j$, $j = 1, \ldots, i$, we observe that:

1. $h(l_1, \ldots, l_i)$ can be easily computed,

2. $g(l_1, \ldots, l_i) \le h(l_1, \ldots, l_i)$, and

3. $h(l_1, \ldots, l_{i-1}) \ge \min_j h(l_1, \ldots, l_{i-1}, j)$.

Therefore, selecting the minimum in the last inequality at each stage, we construct a sequence such that $1 > \epsilon \ge h() \ge h(l_1) \ge h(l_1, l_2) \ge \ldots \ge h(l_1, l_2, \ldots, l_k) \ge g(l_1, l_2, \ldots, l_k)$. Since $g(l_1, l_2, \ldots, l_k)$ is either 0 or 1 (there is no randomness involved), we must have that the choice of paths of this deterministic algorithm gives a maximum congestion less than $(1+\delta)W^*$.

8 Multicommodity Flow

Consider an undirected graph $G = (V, E)$ with a capacity $u_e$ on each edge. Suppose that we are given $k$ commodities and a demand of $f_i$ units of commodity $i$ between two points $s_i$ and $t_i$. In the area of multicommodity flow, one is interested in knowing whether all commodities can be shipped simultaneously. That is, can we find flows of value $f_i$ between $s_i$ and $t_i$ such that the sum over all commodities of the flow on each edge (in either direction) is at most the capacity of the edge? There are several variations of the problem. Here, we consider the concurrent flow problem: find $\lambda^*$, where $\lambda^*$ is the maximum $\lambda$ such that for each commodity we can ship $\lambda f_i$ units from $s_i$ to $t_i$.

This problem can be solved by linear programming, since all the constraints are linear. Indeed, one can have a flow variable for each edge and each commodity, in addition to the variable $\lambda$, and the constraints consist of the flow conservation constraints for each commodity as well as a capacity constraint for every edge.

An example is shown in Figure 10. The demand for each commodity is 1 unit and the capacity on each edge is 1 unit. It can be shown that $\lambda^* = 3/4$.

[Figure 10: An example of the multi-commodity flow problem, with sources $s_1, \ldots, s_4$, sinks $t_1, \ldots, t_4$, demands $f_i = 1$ and capacities $u_e = 1$.]

When there is only one commodity, we know that the maximum flow value is equal to the minimum cut value.
Let us investigate whether there is such an analogue for multicommodity flow. Consider a cut $(S, \bar{S})$. As usual, $\delta(S)$ is the set of edges with exactly one endpoint in $S$. Let

\[ f(S) = \sum_{i : |S \cap \{s_i, t_i\}| = 1} f_i. \]

Since all flow between $S$ and $\bar{S}$ must pass along one of the edges in $\delta(S)$, we must have

\[ \lambda^* f(S) \le u(S), \]

where $u(S) = \sum_{e \in \delta(S)} u_e$. The multicommodity cut problem is to find a set $S$ which minimizes $\frac{u(S)}{f(S)}$. We let $\beta^*$ be the minimum value attainable, and so we have $\lambda^* \le \beta^*$. But, in general, we don't have equality. For example, in Figure 10, we have $\beta^* = 1$. In fact, it can be shown that the multicommodity cut problem is NP-hard.

We shall consider the following two related questions.

1. In the worst case, how large can $\beta^*/\lambda^*$ be?

2. Can we obtain an approximation algorithm for the multicommodity cut problem?

In special cases, answers have been given to these questions by Leighton and Rao [19] and in subsequent work by many other authors. In this section, we describe a very recent, elegant and general answer due to Linial, London and Rabinovich [20]. The technique they used is the embedding of metrics. The application to multicommodity flows was also independently obtained by Aumann and Rabani [3]. We first describe some background material on metrics and their embeddings.

Definition 8 $(X, d)$ is a metric space (or $d$ is a metric on a set $X$) if

1. $\forall x, y : d(x, y) \ge 0$.

2. $\forall x, y : d(x, y) = d(y, x)$.

3. $\forall x, y, z : d(x, y) + d(y, z) \ge d(x, z)$.

Strictly speaking, we have defined a semi-metric, since we do not have the condition $d(x, y) = 0 \Leftrightarrow x = y$. We will be dealing mostly with finite metric spaces, where $X$ is finite.

Definition 9 $(X, d)$ can be embedded into $(Y, \ell)$ if there exists a mapping $\varphi : X \to Y$ which satisfies $\forall x, y : \ell(\varphi(x), \varphi(y)) = d(x, y)$.

Definition 10 $(X, d)$ can be embedded into $(Y, \ell)$ with distortion $c$ if there exists a mapping $\varphi : X \to Y$ which satisfies $\forall x, y : d(x, y) \le \ell(\varphi(x), \varphi(y)) \le c \cdot d(x, y)$.
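Although the multicommodity cut problem is NP-hard in general, on small instances $\beta^*$ can be computed by brute force over all cuts, which makes the definition concrete. This is our own sanity-check code (edges represented as frozensets of endpoints):

```python
from itertools import combinations

def sparsest_cut(nodes, capacity, demands):
    """Brute-force beta* = min over cuts S of u(S) / f(S).
    capacity: dict {frozenset({v, w}): u_e}; demands: list of (s, t, f)."""
    nodes = list(nodes)
    best = None
    for size in range(1, len(nodes)):
        for S in combinations(nodes, size):
            S = set(S)
            # u(S): total capacity of edges with exactly one endpoint in S
            u = sum(c for e, c in capacity.items() if len(e & S) == 1)
            # f(S): total demand separated by the cut
            f = sum(fi for s, t, fi in demands if (s in S) != (t in S))
            if f > 0:
                best = u / f if best is None else min(best, u / f)
    return best
```

For a path $s - m - t$ with unit capacities and a demand of 2 between $s$ and $t$, every separating cut has $u(S)/f(S) = 1/2$, so $\beta^* = 1/2$.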
In $\mathbb{R}^n$, the following are all metrics:

\[ d(x, y) = \|x - y\|_2 = \sqrt{\textstyle\sum_i (x_i - y_i)^2} \qquad (\ell_2 \text{ metric}) \]

\[ d(x, y) = \|x - y\|_1 = \textstyle\sum_i |x_i - y_i| \qquad (\ell_1 \text{ metric}) \]

\[ d(x, y) = \|x - y\|_\infty = \max_i |x_i - y_i| \qquad (\ell_\infty \text{ metric}) \]

\[ d(x, y) = \|x - y\|_p = \big(\textstyle\sum_i |x_i - y_i|^p\big)^{1/p} \qquad (\ell_p \text{ metric}) \]

The following are very natural and central questions regarding the embedding of metrics.

ISIT-$\ell_p$: Given a finite metric space $(X, d)$, can it be embedded into $(\mathbb{R}^n, \ell_p)$ for some $n$?

EMBED-$\ell_p$: Given a finite metric space $(X, d)$ and $c \ge 1$, find an embedding of $(X, d)$ into $(\mathbb{R}^n, \ell_p)$ with distortion at most $c$, for some $n$.

As we will see in the following theorems, the complexity of these questions depends critically on the metrics themselves.

Theorem 16 Any $(X, d)$ can be embedded into $(\mathbb{R}^n, \ell_\infty)$, where $n = |X|$. Thus, the answer to ISIT-$\ell_\infty$ is always "yes".

Proof: We define a coordinate for each point $z \in X$. Let $d(x, z)$ be the $z$ coordinate of $x$. Then,

\[ \ell_\infty(\varphi(x), \varphi(y)) = \max_{z \in X} |d(x, z) - d(y, z)| = d(x, y), \]

because of the triangle inequality: the maximum is at most $d(x, y)$ by the triangle inequality, and it is attained at $z = x$ (or $z = y$).
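The proof of Theorem 16 is constructive and easy to check in code. A small sketch of our own: map each $x$ to the vector of distances $(d(x, z))_{z \in X}$ and verify that $\ell_\infty$ distances are preserved exactly.

```python
def frechet_embed(points, d):
    """Embed a finite (semi-)metric (points, d) into (R^n, l_inf) with
    n = |points|: the z-coordinate of x is d(x, z)."""
    return {x: [d(x, z) for z in points] for x in points}

def linf(u, v):
    # the l_inf distance between two coordinate vectors
    return max(abs(a - b) for a, b in zip(u, v))

# a small metric: shortest-path distances on the path a - b - c
pts = ["a", "b", "c"]
dist = {("a", "a"): 0, ("b", "b"): 0, ("c", "c"): 0,
        ("a", "b"): 1, ("b", "a"): 1, ("b", "c"): 1,
        ("c", "b"): 1, ("a", "c"): 2, ("c", "a"): 2}
phi = frechet_embed(pts, lambda x, y: dist[(x, y)])
```

Checking all pairs confirms $\ell_\infty(\varphi(x), \varphi(y)) = d(x, y)$, exactly as the theorem asserts.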

This note was uploaded on 02/13/2012 for the course CSE 4101 taught by Professor Mirzaian during the Winter '12 term at York University.
