Advanced Algorithms 1.9



However, the number of paths crossing $\delta S$ should be at least $|S|\,|\bar S|$. This implies that
$$|S|\,|\bar S| \le E[\#\text{ of paths through } \delta S] \le 2^{d-1}\,|\delta S|.$$
Since $|S| \le |V|/2$, we have $|\bar S| \ge |V|/2 = 2^{d-1}$. Therefore, for any set $S$ with $|S| \le |V|/2$,
$$\frac{|\delta S|}{|S|} \ge \frac{|\bar S|}{2^{d-1}} \ge 1.$$
So $h(G) \ge 1$. This gives us the conductance of the corresponding MC: $\Phi = p\,h(G) = \frac{1}{2d}$. Then
$$\lambda_2 \le 1 - \frac{\Phi^2}{2} = 1 - \frac{1}{8d^2}.$$
The steady-state distribution is $\pi_j = \frac{1}{2^d}$ for all $j$. Thus, the relative error after $t$ steps is
$$\varepsilon(t) = \max_{i,j} \frac{|p^{(t)}_{ij} - \pi_j|}{\pi_j} \le \frac{\lambda_2^t}{\min_j \pi_j} \le 2^d\left(1 - \frac{1}{8d^2}\right)^t.$$
If we want $\varepsilon(t) \le \varepsilon$, we shall choose $t$ such that
$$t \ge \frac{d\ln 2 - \ln\varepsilon}{-\ln\left(1 - \frac{1}{8d^2}\right)};$$
since $-\ln(1-x) \ge x$, it suffices to take $t \ge 8d^2\,(d\ln 2 - \ln\varepsilon)$. In this case, although the MC has an exponential number of states ($2^d$), we only need $O(d^3)$ steps to generate an almost uniform state with $\varepsilon$, say, constant or even as small as $e^{-O(d)}$.

In general, let $M$ be an ergodic, time-reversible Markov chain with $e^{q(n)}$ states, where $q(n)$ is a polynomial in $n$ ($n$ represents the size of the input). If its conductance satisfies $\Phi \ge \frac{1}{p(n)}$, where $p(n)$ is a polynomial in $n$, we will say that it has the rapidly mixing property. The relative error after $t$ steps is
$$\varepsilon(t) \le \frac{\left(1 - \frac{\Phi^2}{2}\right)^t}{\min_j \pi_j} \le e^{q(n)}\left(1 - \frac{1}{2p^2(n)}\right)^t.$$
To get $\varepsilon(t) \le \varepsilon$, we only need to take $t = 2p^2(n)\left(q(n) + \ln\frac{1}{\varepsilon}\right)$ steps, a polynomial number in $n$ and $\ln\frac{1}{\varepsilon}$. Thus a rapidly mixing MC with uniform stationary distribution on state space $M$ can be used as an $\varepsilon$-sampling scheme on $M$:

Definition 5 A fully polynomial $\varepsilon$-sampling scheme (also called an $\varepsilon$-generator) for a set $M$ is an algorithm that runs in time poly(size of input, $\ln\frac{1}{\varepsilon}$) and outputs an element $x \in M$ with probability $\pi(x)$ such that
$$\max_{x \in M}\left|\pi(x) - \frac{1}{|M|}\right| \le \frac{\varepsilon}{|M|}.$$

$M$ is typically given implicitly by a relation, i.e. on input $x$, $M$ is the set of strings $y$ satisfying some relation $R(x, y)$.

8 Approximation by sampling

We now sketch how an $\varepsilon$-sampling scheme can be used to develop a randomized approximation algorithm for the counting problem we discussed before. To evaluate $|M|$, first we immerse $M$ into a larger set $V \supseteq M$.
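The $O(d^3)$ mixing bound for the $d$-cube is easy to verify numerically. The sketch below (the function name and the choice $d = 4$ are ours) iterates the exact lazy-walk distribution for the number of steps $t \ge 8d^2(d\ln 2 - \ln\varepsilon)$ suggested by the bound, and checks that the relative error has indeed dropped below $\varepsilon$:

```python
import math

def hypercube_walk_error(d, t):
    """Exact relative error of the lazy walk on {0,1}^d after t steps.

    One step: with probability 1/2 stay put, otherwise flip a uniformly
    random coordinate, so each neighbor receives probability 1/(2d)."""
    n = 1 << d
    p = [0.0] * n
    p[0] = 1.0                              # start concentrated at vertex 0
    for _ in range(t):
        q = [0.5 * pi for pi in p]          # self-loop keeps half the mass
        for i in range(n):
            share = p[i] / (2 * d)
            for b in range(d):
                q[i ^ (1 << b)] += share    # mass sent to each neighbor
        p = q
    pi = 1.0 / n                            # uniform stationary distribution
    return max(abs(pi_t - pi) / pi for pi_t in p)

d = 4
eps = 0.01
# t >= 8 d^2 (d ln 2 - ln eps) suffices by the bound in the text
t = math.ceil(8 * d * d * (d * math.log(2) - math.log(eps)))
err = hypercube_walk_error(d, t)
assert err <= eps
```

By symmetry of the cube, starting from vertex 0 covers all rows of $P^t$, so this single start suffices for the $\max_{i,j}$ in the definition of $\varepsilon(t)$.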
Then we sample from $V$, and approximate $\frac{|M|}{|V|}$ by
$$\frac{\text{size of } M \cap \text{sample}}{\text{size of sample}}.$$
This scheme works well if $|M|$ is polynomially comparable to $|V|$. But if $|M| \ll |V|$, i.e. $|M|$ is exponentially smaller than $|V|$, this scheme will have trouble, since in order to obtain a small relative error in the approximation, the number of samples will need to be so large (i.e. exponential) as to make this approach infeasible. See our previous discussion of the problem of counting individuals, and our study of Chernoff bounds.

Example: Suppose we wish to approximate $\pi$. If we take a square with side length 2 and inscribe within it a circle of radius 1, then the ratio of the area of the circle to the area of the square is $\pi/4$. Thus the probability that a uniformly generated point in the square belongs to the circle is precisely $\pi/4$.

Figure 8: How not to calculate $\pi$.

By generating points within the square at random according to a uniform distribution, we can approximate $\pi$ as simply 4 times the fraction of points that lie within the circle. The accuracy of such an approximation depends on how closely we can approximate the uniform distribution and on the number of samples. However, we will run into trouble if we want to estimate $\mathrm{vol}(B_n)$ by the same method, where $B_n = \{x \in \mathbb{R}^n : \|x\| \le 1\}$, since this volume is exponentially smaller than the volume $2^n$ of the corresponding cube. Nevertheless, a very nice application of rapidly mixing Markov chains is precisely the computation of the volume of a convex body or region [6]. To avoid the problem just mentioned, what is done is to immerse the body whose volume $V_0$ needs to be computed in a sequence of bodies of volumes $V_1, V_2, \ldots$ such that each ratio $V_i/V_{i+1}$ is polynomially bounded. Then one can evaluate $V_0$ by the formula:
$$V_0 = \frac{V_0}{V_1}\cdot\frac{V_1}{V_2}\cdot\frac{V_2}{V_3}\cdots\frac{V_{n-1}}{V_n}\cdot V_n.$$
We now show how this technique can be used to develop a fully polynomial randomized approximation scheme for computing the permanent of a class of 0-1 matrices.
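The square-and-circle example above is directly runnable; this minimal sketch (function name ours, seeded for reproducibility) estimates $\pi$ as 4 times the hit fraction:

```python
import random

def estimate_pi(num_samples, seed=0):
    """Estimate pi as 4 * (fraction of uniform points landing in the unit disk)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(num_samples):
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / num_samples

print(estimate_pi(100_000))   # close to 3.14; error shrinks like O(1/sqrt(samples))
```

The standard deviation of this estimator is about $4\sqrt{p(1-p)/N}$ with $p = \pi/4$, which is why the same recipe fails for $\mathrm{vol}(B_n)$: there the hit probability itself is exponentially small in $n$.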
9 Approximating the permanent

Recall that for an $n \times n$ 0-1 matrix $A$, the permanent of $A$, $\mathrm{perm}(A)$, is the number of perfect matchings in the bipartite graph $G$ whose incidence matrix is $A$. It is known that computing $\mathrm{perm}(A)$ is $\#P$-complete. In order to develop an approximation scheme for the permanent, we use the technique of approximation by sampling. As a naive adoption of this technique, we could generate edge sets at random and count the fraction that are perfect matchings. Unfortunately, this scheme may resemble searching for a needle in a haystack: if the fraction of edge sets that are perfect matchings is very small, then in order to obtain a small relative error in the approximation, the relative error in the sampling may need to be so small, and the number of samples so large, as to make this approach infeasible.

Instead of trying to directly approximate the fraction of edge sets that are perfect matchings, we can try to approximate a different ratio from which the permanent can be computed. Specifically, for $k = 1, 2, \ldots, n$, let $M_k$ denote the set of matchings of size $k$, and let $m_k = |M_k|$ denote the number of matchings of size $k$. The permanent of $A$ is then given by $\mathrm{perm}(A) = m_n$, and we can express $\mathrm{perm}(A)$ as a product of ratios:
$$\mathrm{perm}(A) = \frac{m_n}{m_{n-1}}\cdot\frac{m_{n-1}}{m_{n-2}}\cdots\frac{m_2}{m_1}\cdot m_1, \qquad (7)$$
where $m_1 = |E|$. Thus, we can approximate the permanent of $A$ by approximating the ratios $m_k/m_{k-1}$ for $k = 2, 3, \ldots, n$. We write $m_k/m_{k-1}$ as
$$\frac{m_k}{m_{k-1}} = \frac{u_k}{m_{k-1}} - 1, \qquad (8)$$
where $u_k = |U_k|$ and $U_k = M_k \cup M_{k-1}$ (see Figure 9), and then we use an $\varepsilon$-sampling scheme for $U_k$ to approximate the fraction $m_{k-1}/u_k$. To summarize our approach: for each $k = 2, 3, \ldots, n$, we take random samples over a uniform distribution on the set of matchings of size $k$ and $k-1$, and we count the fraction that are matchings of size $k-1$; this gives us $m_{k-1}/u_k$, and we use Equation (8) to get $m_k/m_{k-1}$. Equation (7) then gives us $\mathrm{perm}(A)$.
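Equations (7) and (8) can be sanity-checked by brute force on a toy graph (all names below are ours; this enumeration is exponential and only meant for tiny inputs):

```python
from itertools import combinations

def matchings_by_size(edges):
    """Brute-force count of matchings of each size; returns {k: m_k}, m_0 = 1."""
    counts = {0: 1}
    for k in range(1, len(edges) + 1):
        c = 0
        for subset in combinations(edges, k):
            us = [u for u, v in subset]
            vs = [v for u, v in subset]
            if len(set(us)) == k and len(set(vs)) == k:   # no shared endpoints
                c += 1
        if c == 0:
            break
        counts[k] = c
    return counts

# toy example: K_{3,3} minus the edge (0, 0)
edges = [(u, v) for u in range(3) for v in range(3) if (u, v) != (0, 0)]
m = matchings_by_size(edges)
n = 3

# perm(A) = m_1 * prod_{k=2..n} m_k / m_{k-1}   (Equation (7), telescoping)
prod = m[1]
for k in range(2, n + 1):
    prod *= m[k] / m[k - 1]
assert round(prod) == m[n]

# m_k/m_{k-1} = u_k/m_{k-1} - 1  with  u_k = m_k + m_{k-1}   (Equation (8))
for k in range(2, n + 1):
    u_k = m[k] + m[k - 1]
    assert abs(m[k] / m[k - 1] - (u_k / m[k - 1] - 1)) < 1e-12
```

The approximation scheme replaces the exact ratios $m_{k-1}/u_k$ in this product with estimates obtained by sampling uniformly from $U_k$.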
The following two claims establish the connection between $\varepsilon$-sampling of $U_k$ and approximation of the permanent of $A$.

Figure 9: Each matching in $U_k$ has size either $k$ or $k-1$.

Claim 18 (Broder) If there is a fully polynomial $\varepsilon$-sampling scheme for $U_k = M_k \cup M_{k-1}$, then there exists a randomized approximation scheme for $m_k/m_{k-1}$ that runs in time polynomial in $1/\varepsilon$, $n$, and $u_k/m_k$.

Claim 19 If for each $k = 2, 3, \ldots, n$ there is a fully polynomial $\varepsilon$-sampling scheme for $U_k$, then there exists a randomized approximation scheme for the permanent of $A$ that runs in time polynomial in $1/\varepsilon$, $n$, and $\max_k u_k/m_k$.

For those graphs with $\max_k u_k/m_k$ bounded by a polynomial in $n$, Claim 19 gives us a fully polynomial randomized approximation scheme for the permanent, provided, of course, that we can produce an $\varepsilon$-sampling scheme for $U_k$. In fact, it turns out that
$$\max_k \frac{u_k}{m_k} = \frac{u_n}{m_n}.$$
This is because $\{m_k\}$ is log-concave, i.e. $m_k\,m_{k+2} \le m_{k+1}^2$. Thus, if we can develop an $\varepsilon$-sampling scheme for the matchings $U_k$ for each $k = 2, 3, \ldots, n$, then for the class of graphs with $u_n/m_n$ bounded by a polynomial in $n$, we have an fpras for the permanent. After developing an $\varepsilon$-sampling scheme, we will look at such a class of graphs.

An $\varepsilon$-sampling scheme for matchings

We now turn our attention to developing an $\varepsilon$-sampling scheme for $U_k = M_k \cup M_{k-1}$, and it should come as no surprise that we will use the technique of rapidly mixing Markov chains. We now define a Markov chain whose states are the matchings in $U_k$.
Consider any pair $(M_i, M_j)$ of states (matchings) and create a transition between them according to the following rules; here
$$M_i \triangle M_j = (M_i - M_j) \cup (M_j - M_i)$$
denotes the symmetric difference.

If $M_i$ and $M_j$ differ by the addition or removal of a single edge, that is, $M_i \triangle M_j = \{e\}$ for some edge $e$, then there is a transition from $M_i$ to $M_j$ and a transition from $M_j$ to $M_i$. Both transitions have probability $p_{ij} = p_{ji} = 1/(2m)$, where $m$ denotes the number of edges in the graph.

If $M_i$ and $M_j$ are both matchings of size $k-1$ and they differ by removing one edge and adding another edge that has an endpoint in common with the removed edge, that is, $M_i, M_j \in M_{k-1}$ and $M_i \triangle M_j = \{(u,v), (v,w)\}$ for some pair of edges $(u,v)$ and $(v,w)$, then there is a transition from $M_i$ to $M_j$ and a transition from $M_j$ to $M_i$. Both transitions have probability $p_{ij} = p_{ji} = 1/(2m)$.

To complete the Markov chain, for each state $M_i$ we add a loop transition from $M_i$ to itself with probability $p_{ii}$ set to ensure that $\sum_j p_{ij} = 1$.

Finally, this Markov chain also has the property that $p_{ij} = p = 1/(2m)$ for every transition with $i \ne j$, and this property allows us to compute $\Phi$ by
$$\Phi = \frac{h(H)}{2m},$$
where we recall that $h(H)$ is the magnification of the underlying graph $H$ (not the graph $G$ on which we are sampling matchings). If we could now lower bound $h(H)$ by $h(H) \ge 1/p(n)$, where $p(n)$ is a polynomial in $n$, then since $m \le n^2$, we would have $\Phi \ge 1/p'(n)$ ($p'(n)$ also a polynomial in $n$), and so we would have a fully polynomial $\varepsilon$-sampling scheme for $U_k$. We cannot actually show such a lower bound, but the following theorem gives us a lower bound of $h(H) \ge m_k/(c\,u_k)$ ($c$ a constant), and this, by Claim 19, is sufficient to give us a randomized approximation scheme that is polynomial in $1/\varepsilon$, $n$, and $u_n/m_n$.

Theorem 20 (Dagum, Luby, Mihail and Vazirani [5]) There exists a constant $c$ such that
$$h(H) \ge \frac{1}{c}\cdot\frac{m_k}{u_k}.$$

Corollary 21 There exists a fully polynomial $\varepsilon$-sampling scheme for $U_k$ provided that $u_k/m_k = O(n^{c'})$ for some constant $c'$.
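One convenient way to simulate this chain is to hold with probability $1/2$ and otherwise pick a uniformly random edge and attempt the corresponding move, rejecting illegal ones; each non-loop transition then has probability $\frac{1}{2}\cdot\frac{1}{m} = \frac{1}{2m}$ as required, and rejections realize the self-loop mass $p_{ii}$. A minimal sketch under these assumptions (function and variable names are ours):

```python
import random

def chain_step(matching, edges, k, rng):
    """One step of the matchings chain on U_k = M_k (union) M_{k-1}.

    `matching` is a frozenset of edges (u, v). With probability 1/2 hold;
    otherwise pick a uniform edge e and try: remove e (size k -> k-1),
    add e (size k-1 -> k), or rotate e against the one matching edge it
    touches (size k-1 -> k-1). Illegal moves are rejected (stay put)."""
    if rng.random() < 0.5:                     # lazy half of each 1/(2m) split
        return matching
    u, v = rng.choice(edges)
    matched = {x for e in matching for x in e}
    if (u, v) in matching and len(matching) == k:
        return matching - {(u, v)}             # remove: size k -> k-1
    if len(matching) == k - 1:
        if u not in matched and v not in matched:
            return matching | {(u, v)}         # add: size k-1 -> k
        touching = [e for e in matching if u in e or v in e]
        if len(touching) == 1:                 # rotation with a shared endpoint
            return (matching - {touching[0]}) | {(u, v)}
    return matching                            # rejected move: self-loop

# tiny sanity run on K_{2,2} with k = 2: the state never leaves U_2
edges = [(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')]
rng = random.Random(42)
state = frozenset([(0, 'a')])                  # start in M_1
for _ in range(1000):
    state = chain_step(state, edges, 2, rng)
    assert len(state) in (1, 2)
```

Since every non-loop transition probability is the same $1/(2m)$ in both directions, the chain is symmetric, which is what gives the uniform stationary distribution used below.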
It is easy to see that this Markov chain is ergodic: the self-loops imply aperiodicity, and irreducibility can be seen from the construction. Irreducibility implicitly assumes the existence of some matching of size $k$; otherwise the chain might not be irreducible. The proof of irreducibility indeed rests on the fact that any matching of size $k$ can reach any matching of size $k-1$, implying that one can reach any matching from any other matching provided a matching of size $k$ exists. This Markov chain is time reversible and has the desired uniform steady-state probability distribution, $\pi_i = 1/u_k$, since it is clearly symmetric. Furthermore, $p_{ii} \ge 1/2$ for each state $M_i$, and therefore $\lambda_{\max} = \lambda_2$, which means that we can bound the relative error after $t$ steps by
$$\varepsilon(t) \le u_k \left(1 - \frac{\Phi^2}{2}\right)^t.$$

A preliminary lemma is required.

Lemma 22 Let $M_1, M_2$ be matchings in a bipartite graph. Then $M_1 \triangle M_2$ is a union of vertex-disjoint paths and cycles which alternate between the edges of $M_1$ and $M_2$.

Sketch of the proof of the main result: For each pair of states $(M_1, M_2)$ with $M_1 \in M_{k-1}$ and $M_2 \in M_k$, we pick a random path from $M_1$ to $M_2$ in $H$ as follows. By Lemma 22 we can write the symmetric difference of the matchings $M_1$ and $M_2$ as
$$M_1 \triangle M_2 = C \cup D \cup E,$$
where each element of $C$ denotes a cycle or path in $G$ with the same number of edges from $M_1$ as from $M_2$, each element of $D$ denotes a path in $G$ with one more edge from $M_2$ than from $M_1$, and each element of $E$ denotes a path in $G$ with one more edge from $M_1$ than from $M_2$. Notice that there must be exactly one more element in $D$ than in $E$. We order the paths in each set at random, so that $C_1, C_2, \ldots, C_r$ is a random permutation of the $r$ paths or cycles in $C$, $D_1, D_2, \ldots, D_q$ is a random permutation of the $q$ paths in $D$, and $E_1, E_2, \ldots, E_{q-1}$ is a random permutation of the $q-1$ paths in $E$. A path from $M_1$ to $M_2$ in $H$ is then obtained by alternately applying the $D_i$'s and $E_i$'s and finally the $C_i$'s:
$$M_1 \xrightarrow{D_1} M' = M_1 \triangle D_1 \xrightarrow{E_1} M'' = M' \triangle E_1 \xrightarrow{D_2} M''' \xrightarrow{} \cdots \xrightarrow{C_1} \cdots \xrightarrow{C_r} M_2.$$
Of course, $M_1 \to M_1 \triangle D_1$ may not actually be a transition of the chain, but $M_1 \to M_1 \triangle D_1$ does define a path of transitions if we use the edges of $D_1$ two at a time. The crucial part of this proof is showing that there exists a constant $c$ such that for any edge $e$ of $H$, the expected number of paths that go through $e$ is upper bounded by
$$E[\#\text{ of paths through } e] \le c\,m_{k-1}.$$
We will not do this part of the proof. Now if we consider any set $S \subseteq V$ of vertices of $H$, then by linearity of expectation, the expected number of paths that cross from $S$ over to $\bar S$ is upper bounded by
$$E[\#\text{ of paths crossing } \delta S] \le c\,m_{k-1}\,|\delta S|,$$
where we recall that $\delta S$ denotes the coboundary of $S$. Therefore, there exists some choice for the paths such that not more than $c\,m_{k-1}\,|\delta S|$ of them cross the boundary of $S$.

Figure 10: Partitioning $U_k$.

Each vertex of $S$ is either a matching of $M_k$ or a matching of $M_{k-1}$, and likewise for $\bar S$, so we can partition $U_k$ as shown in Figure 10. We assume, without loss of generality, that
$$\frac{|S \cap M_k|}{|S|} \ge \frac{m_k}{u_k} \qquad (9)$$
and therefore
$$\frac{|\bar S \cap M_{k-1}|}{|\bar S|} \ge \frac{m_{k-1}}{u_k} \qquad (10)$$
(otherwise, we exchange $S$ and $\bar S$). The number of paths that cross $\delta S$ must be at least $|S \cap M_k|\,|\bar S \cap M_{k-1}|$, since for any $M_1 \in \bar S \cap M_{k-1}$ and $M_2 \in S \cap M_k$ there must be a path from $M_1$ to $M_2$, which crosses from $\bar S$ to $S$. By multiplying together Inequalities (9) and (10), we have
$$|S \cap M_k|\,|\bar S \cap M_{k-1}| \ge \frac{m_k\,m_{k-1}}{u_k^2}\,|S|\,|\bar S|.$$
But we have already seen that we can choose paths so that the number of paths crossing the boundary of $S$ is not more than $c\,m_{k-1}\,|\delta S|$. Therefore, it must be the case that
$$\frac{m_k\,m_{k-1}}{u_k^2}\,|S|\,|\bar S| \le c\,m_{k-1}\,|\delta S|.$$
Notice that this statement is unchanged if we replace $S$ by $\bar S$. So, without loss of generality, $|S| \le \frac{u_k}{2}$, which implies that $|\bar S| \ge \frac{u_k}{2}$. Hence
$$\frac{m_k}{u_k^2}\,|S|\,|\bar S| \ge \frac{m_k\,|S|}{2u_k},$$
which implies that
$$\frac{|\delta S|}{|S|} \ge \frac{1}{2c}\cdot\frac{m_k}{u_k}, \qquad\text{and hence}\qquad h(H) \ge \frac{1}{2c}\cdot\frac{m_k}{u_k}.$$

A class of graphs with $u_n/m_n$ polynomially bounded

We finish this discussion by considering a class of graphs for which $u_n/m_n$ is bounded by a polynomial in $n$. Specifically, we consider the class of dense bipartite graphs.
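Lemma 22 is easy to check mechanically: every vertex meets at most one edge of each matching, so in the symmetric difference every vertex has degree at most 2, which forces a disjoint union of paths and cycles. A small sketch (names ours, degree check only):

```python
from collections import Counter

def symmetric_difference_max_degree(m1, m2):
    """Max vertex degree in M1 (triangle) M2; at most 2 means the difference
    decomposes into vertex-disjoint alternating paths and cycles."""
    diff = set(m1) ^ set(m2)
    deg = Counter(v for e in diff for v in e)
    return max(deg.values(), default=0)

# two perfect matchings of K_{3,3}: their difference is an alternating 6-cycle
m1 = {(0, 'a'), (1, 'b'), (2, 'c')}
m2 = {(0, 'b'), (1, 'c'), (2, 'a')}
assert symmetric_difference_max_degree(m1, m2) == 2
assert symmetric_difference_max_degree(m1, m1) == 0
```

The alternation along each component is immediate: consecutive edges share a vertex, and no two edges of one matching can share a vertex.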
A dense bipartite graph is a bipartite graph in which every vertex has degree at least $n/2$ (recall that $n$ is the number of vertices on each side of the bipartition). We now show that for dense bipartite graphs, $m_{n-1}/m_n \le n^2$. Since $u_n/m_n = 1 + m_{n-1}/m_n$, this bound gives us the desired result.

Consider a matching $M \in M_{n-1}$ with edges $\{(u_1, v_1), (u_2, v_2), \ldots, (u_{n-1}, v_{n-1})\}$, so that $u_n$ and $v_n$ are the two exposed vertices. Since both $u_n$ and $v_n$ have degree at least $n/2$, there are the following two possibilities.

$(u_n, v_n)$ is an edge, which implies that $M' := M \cup \{(u_n, v_n)\} \in M_n$.

There exists an $i$, $1 \le i \le n-1$, such that both $(u_n, v_i)$ and $(u_i, v_n)$ are edges (this follows from the pigeonhole principle). In this case $M' := M - \{(u_i, v_i)\} \cup \{(u_n, v_i), (u_i, v_n)\} \in M_n$.

Thus we can define a function $f : M_{n-1} \to M_n$ by letting $f(M) = M'$. Now consider a matching $M' \in M_n$, and let $f^{-1}(M')$ denote the set of matchings $M \in M_{n-1}$ such that $f(M) = M'$. Each $M \in M_{n-1}$ with $f(M) = M'$ can be obtained from $M'$ in one of two different ways.

Some pair of edges $(u_i, v_i), (u_j, v_j)$ is removed from $M'$ and replaced with a single edge that must be either $(u_i, v_j)$ or $(u_j, v_i)$. We can choose the pair of edges to remove in $\binom{n}{2}$ ways and we can choose the replacement edge in at most 2 ways.

An edge $(u_i, v_i)$ is removed from $M'$. This edge can be chosen in $n$ ways.

Thus, there are at most
$$2\binom{n}{2} + n = n(n-1) + n = n^2$$
matchings in $M_{n-1}$ that could possibly map to $M'$. This means that $|f^{-1}(M')| \le n^2$ for every matching $M' \in M_n$, and therefore $m_{n-1}/m_n \le n^2$. Thus we have shown that for dense bipartite graphs, there exists a fully polynomial randomized approximation scheme for the permanent. This result is tight in the sense that graphs can be constructed whose vertices have degree $\frac{n}{2} - 1$ and for which $\frac{m_{n-1}}{m_n}$ is exponential. There is also a theorem of Broder which says:

Theorem 23 Counting perfect matchings on dense bipartite graphs is $\#P$-complete.
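The bound $m_{n-1}/m_n \le n^2$ can be verified by brute force on a small dense instance; this sketch (names ours, exponential enumeration for tiny $n$ only) uses $K_{3,3}$, where every vertex has degree $n \ge n/2$:

```python
from itertools import combinations

def count_matchings(edges, k):
    """Brute-force count of size-k matchings in a bipartite graph (tiny inputs)."""
    total = 0
    for subset in combinations(edges, k):
        if len({u for u, _ in subset}) == k and len({v for _, v in subset}) == k:
            total += 1
    return total

n = 3
edges = [(u, v) for u in range(n) for v in range(n)]   # K_{3,3} is dense
m_n = count_matchings(edges, n)        # perfect matchings of K_{n,n} = n!
m_n1 = count_matchings(edges, n - 1)
assert m_n == 6 and m_n1 == 18
assert m_n1 / m_n <= n * n             # the n^2 bound from the text
```

Here $m_{n-1}/m_n = 3 \le n^2 = 9$, comfortably inside the bound.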
Other applications of Markov chains

There are other uses for Markov chains in the design of algorithms. Of these, the most interesting is computing the volume of convex bodies. It can be shown that a fully polynomial randomized approximation scheme can be constructed for this problem [6]. This is interesting because it can also be shown that no deterministic polynomial-time algorithm can approximate the volume to within a factor of $n^{cn}$ for $c < 0.5$, where $n$ is the dimension [3].

References

[1] N. Alon. Eigenvalues and expanders. Combinatorica, 6:83-96, 1986.

[2] N. Alon and V. Milman. $\lambda_1$, isoperimetric inequalities for graphs, and superconcentrators. Journal of Combinatorial Theory B, 38:73-88, 1985.

[3] I. Bárány and Z. Füredi. Computing the volume is difficult. In Proceedings of the 18th Annual ACM Symposium on Theory of Computing, pages 442-447, 1986.

[4] J. Cheeger. A lower bound for the smallest eigenvalue of the Laplacian. In R. Gunning, editor, Problems in Analysis, pages 195-199. Princeton University Press, 1970.

[5] P. Dagum, M. Luby, M. Mihail, and U. Vazirani. Polytopes, permanents and graphs with large factors. In Proceedings of the 29th Annual Symposium on Foundations of Computer Science, pages 412-422, 1988.

[6] M. Dyer, A. Frieze, and R. Kannan. A random polynomial time algorithm for approximating the volume of convex bodies. Journal of the ACM, 38:1-17, 1991.

[7] R. Karp. An introduction to randomized algorithms. Discrete Applied Mathematics, 34:165-201, 1991.

[8] R. M. Karp and M. O. Rabin. Efficient randomized pattern-matching algorithms. IBM Journal of Research and Development, 31:249-260, 1987.

[9] R. Motwani and P. Raghavan. Randomized Algorithms. 1994.

[10] K. Mulmuley, U. Vazirani, and V. Vazirani. Matching is as easy as matrix inversion. Combinatorica, 7(1):105-113, 1987.

[11] A. Sinclair and M. Jerrum. Approximate counting, uniform generation and rapidly mixing Markov chains. Information and Computation, 82:93-133, 1989.

This note was uploaded on 02/13/2012 for the course CSE 4101 taught by Professor Mirzaian during the Winter '12 term at York University.
