16 Maximum Satisfiability

The maximum satisfiability problem has been a classical problem in approximation algorithms. More recently, its study has led to crucial insights in the area of hardness of approximation (see Chapter 29). In this chapter, we will use LP-rounding, with randomization, to obtain a 3/4 factor approximation algorithm. We will derandomize this algorithm using the method of conditional expectation.

Problem 16.1 (Maximum satisfiability (MAX-SAT)) Given a conjunctive normal form formula $f$ on Boolean variables $x_1, \dots, x_n$, and nonnegative weights, $w_c$, for each clause $c$ of $f$, find a truth assignment to the Boolean variables that maximizes the total weight of satisfied clauses. Let $C$ represent the set of clauses of $f$, i.e., $f = \bigwedge_{c \in C} c$. Each clause is a disjunction of literals; each literal is either a Boolean variable or its negation. Let $\mathrm{size}(c)$ denote the size of clause $c$, i.e., the number of literals in it. We will assume that the sizes of clauses in $f$ are arbitrary.

For any positive integer $k$, we will denote by MAX$k$-SAT the restriction of MAX-SAT to instances in which each clause is of size at most $k$. MAX-SAT is NP-hard; in fact, even MAX-2SAT is NP-hard (in contrast, 2SAT is in P). We will first present two approximation algorithms for MAX-SAT, having guarantees of 1/2 and $1 - 1/e$, respectively. The first performs better if the clause sizes are large, and the second performs better if they are small. We will then show how an appropriate combination of the two algorithms achieves the promised approximation guarantee.

In the interest of minimizing notation, let us introduce common terminology for all three algorithms. Random variable $W$ will denote the total weight of satisfied clauses. For each clause $c$, random variable $W_c$ denotes the weight contributed by clause $c$ to $W$. Thus, $W = \sum_{c \in C} W_c$ and $E[W_c] = w_c \cdot \Pr[c \text{ is satisfied}]$. (Strictly speaking, this is an abuse of notation, since the randomization used by the three algorithms is different.)
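Before turning to the algorithms, it may help to fix a concrete machine representation for $f$ and $W$. The encoding below — a literal is a signed integer, $+i$ for $x_i$ and $-i$ for its negation, as in the DIMACS CNF convention — is our own illustration, not part of the text:

```python
def clause_satisfied(clause, assignment):
    """A clause (disjunction) is satisfied iff at least one literal is True."""
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def total_weight(clauses, weights, assignment):
    """W: the total weight of satisfied clauses -- the quantity to maximize."""
    return sum(w for c, w in zip(clauses, weights)
               if clause_satisfied(c, assignment))

# Example: f = (x1 v ~x2) ^ (x2 v x3) ^ (~x1), with weights 3, 2, 1.
clauses = [[1, -2], [2, 3], [-1]]
weights = [3, 2, 1]
assignment = {1: False, 2: True, 3: True}   # satisfies the last two clauses
print(total_weight(clauses, weights, assignment))  # -> 3
```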
16.1 Dealing with large clauses

The first algorithm is straightforward. Set each Boolean variable to be True independently with probability 1/2 and output the resulting truth assignment, say $T$. For $k \geq 1$, define $\alpha_k = 1 - 2^{-k}$.

Lemma 16.2 If $\mathrm{size}(c) = k$, then $E[W_c] = \alpha_k w_c$.

Proof: Clause $c$ is not satisfied by $T$ iff all its literals are set to False. The probability of this event is $2^{-k}$. □

For $k \geq 1$, $\alpha_k \geq 1/2$. By linearity of expectation,

$$E[W] = \sum_{c \in C} E[W_c] \geq \frac{1}{2} \sum_{c \in C} w_c \geq \frac{1}{2}\,\mathrm{OPT},$$

where we have used a trivial upper bound on OPT: the total weight of clauses in $C$. Instead of converting this into a high-probability statement, with a corresponding loss in guarantee, we show how to derandomize this procedure.
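The randomized procedure, together with the exact evaluation of $E[W]$ that Lemma 16.2 yields, can be sketched as follows (the signed-integer clause encoding, $+i$ for $x_i$ and $-i$ for its negation, is an illustration convention, not from the text):

```python
import random

def random_assignment(n):
    """Set each of x_1, ..., x_n to True independently with probability 1/2."""
    return {i: random.random() < 0.5 for i in range(1, n + 1)}

def expected_weight(clauses, weights):
    """E[W] = sum over clauses c of alpha_k * w_c, with alpha_k = 1 - 2^(-k)
    and k = size(c); this follows from Lemma 16.2 and linearity."""
    return sum((1 - 2.0 ** -len(c)) * w for c, w in zip(clauses, weights))

# Literals are signed integers: +i means x_i, -i means its negation.
clauses = [[1, -2], [2, 3], [-1]]
weights = [3.0, 2.0, 1.0]

T = random_assignment(3)                   # the random truth assignment T
print(expected_weight(clauses, weights))   # -> 4.25, which is >= 6/2 >= OPT/2
```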
The resulting algorithm deterministically computes a truth assignment such that the weight of satisfied clauses is $\geq E[W] \geq \mathrm{OPT}/2$. Observe that $\alpha_k$ increases with $k$, and the guarantee of this algorithm is 3/4 if each clause has two or more literals. (The next algorithm is designed to deal with unit clauses more effectively.)

16.2 Derandomizing via the method of conditional expectation

We will critically use the self-reducibility of SAT (see Section A.5). Consider
the self-reducibility tree $T$ for formula $f$. Each internal node at level $i$ corresponds to a setting for Boolean variables $x_1, \dots, x_i$, and each leaf represents a complete truth assignment to the $n$ variables. Let us label each node of $T$ with its conditional expectation as follows. Let $a_1, \dots, a_i$ be a truth assignment to $x_1, \dots, x_i$. The node corresponding to this assignment will be labeled with $E[W \mid x_1 = a_1, \dots, x_i = a_i]$. If $i = n$, this is a leaf node and its conditional expectation is simply the total weight of clauses satisfied by its truth assignment.

Lemma 16.3 The conditional expectation of any node in $T$ can be computed
in polynomial time.

Proof: Consider a node $x_1 = a_1, \dots, x_i = a_i$. Let $\phi$ be the Boolean formula on variables $x_{i+1}, \dots, x_n$ obtained for this node via self-reducibility. Clearly, the expected weight of satisfied clauses of $\phi$ under a random truth assignment to the variables $x_{i+1}, \dots, x_n$ can be computed in polynomial time. Adding to this the total weight of clauses of $f$ already satisfied by the partial assignment $x_1 = a_1, \dots, x_i = a_i$ gives the answer. □

Theorem 16.4 We can compute, in polynomial time, a path from the root to a leaf such that the conditional expectation of each node on this path is
$\geq E[W]$.

Proof: The conditional expectation of a node is the average of the conditional expectations of its two children, i.e.,

$$E[W \mid x_1 = a_1, \dots, x_i = a_i] = \frac{1}{2}\,E[W \mid x_1 = a_1, \dots, x_i = a_i, x_{i+1} = \text{True}] + \frac{1}{2}\,E[W \mid x_1 = a_1, \dots, x_i = a_i, x_{i+1} = \text{False}].$$

The reason, of course, is that $x_{i+1}$ is equally likely to be set to True or False. As a result, the child with the larger value has a conditional expectation at least as large as that of the parent. This establishes the existence of the desired path. As a consequence of Lemma 16.3, it can be computed in polynomial time. □

The deterministic algorithm follows as a corollary of Theorem 16.4. We
simply output the truth assignment on the leaf node of the path computed.
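Concretely, the derandomized algorithm is short. The sketch below — our own illustration, with literals as signed integers, $+i$ for $x_i$ and $-i$ for its negation — labels nodes exactly as in the proof of Lemma 16.3 (weight of clauses already satisfied, plus $\alpha_k$ times the weight of each clause with $k$ unset literals) and always descends to the child with the larger label:

```python
def conditional_expectation(clauses, weights, partial):
    """E[W | the variables in `partial` are fixed], remaining ones uniform."""
    total = 0.0
    for clause, w in zip(clauses, weights):
        satisfied, free = False, 0
        for lit in clause:
            v = abs(lit)
            if v not in partial:
                free += 1                       # literal still unset
            elif partial[v] == (lit > 0):
                satisfied = True
                break
        if satisfied:
            total += w                          # already satisfied by `partial`
        elif free:
            total += (1 - 2.0 ** -free) * w     # alpha_free * w (Lemma 16.2)
        # else: clause already falsified by `partial`; it contributes 0
    return total

def derandomized_max_sat(clauses, weights, n):
    """Descend the self-reducibility tree, always moving to the child whose
    conditional expectation is at least the parent's (Theorem 16.4)."""
    partial = {}
    for i in range(1, n + 1):
        e_true = conditional_expectation(clauses, weights, {**partial, i: True})
        e_false = conditional_expectation(clauses, weights, {**partial, i: False})
        partial[i] = e_true >= e_false
    return partial

clauses = [[1, -2], [2, 3], [-1]]
weights = [3.0, 2.0, 1.0]
leaf = derandomized_max_sat(clauses, weights, 3)
# At a leaf the label is just the satisfied weight; here E[W] = 4.25.
print(conditional_expectation(clauses, weights, leaf))  # -> 5.0
```

Since no literal is free at a leaf, the final call returns the exact weight of the computed assignment, matching the corollary's guarantee of $\geq E[W]$.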
The total weight of clauses satisfied by it is $\geq E[W]$.

Let us show that the technique outlined above can, in principle, be used to derandomize more complex randomized algorithms. Suppose the algorithm does not set the Boolean variables independently of each other (for instance, see Remark 16.6). Now,

$$E[W \mid x_1 = a_1, \dots, x_i = a_i] = E[W \mid x_1 = a_1, \dots, x_i = a_i, x_{i+1} = \text{True}] \cdot \Pr[x_{i+1} = \text{True} \mid x_1 = a_1, \dots, x_i = a_i] \, + \, E[W \mid x_1 = a_1, \dots, x_i = a_i, x_{i+1} = \text{False}] \cdot \Pr[x_{i+1} = \text{False} \mid x_1 = a_1, \dots, x_i = a_i].$$

The sum of the two conditional probabilities is again 1, since the two
events are exhaustive. So, the conditional expectation of the parent is still
a convex combination of the conditional expectations of the two children. If we can determine, in polynomial time, which of the two children has a larger value, we can again derandomize the algorithm. However, computing the conditional expectations may not be easy. Observe how critically independence was used in the proof of Lemma 16.3. It was because of independence that we could assume a random truth assignment on Boolean variables $x_{i+1}, \dots, x_n$ and thereby compute the expected weight of satisfied clauses of $\phi$.

In general, a randomized algorithm may pick from a larger set of choices
and not necessarily with equal probability. But once again a convex combination of the conditional expectations of these choices, given by the probabilities of picking them, equals the conditional expectation of the parent. Hence there must be a choice that has at least as large a conditional expectation as the parent.