in the analysis of a divide-and-conquer algorithm that divides a given problem into
a subproblems of size at most n/b each, solves each subproblem recursively, and
then “merges” the subproblem solutions into a solution to the entire problem. The
function f (n), in this equation, denotes the total additional time needed to divide
the problem into subproblems and merge the subproblem solutions into a solution to
the entire problem. Each of the recurrence equations given above uses this form, as
do each of the recurrence equations used to analyze divide-and-conquer algorithms
given earlier in this book. Thus, it is indeed a general form for divide-and-conquer recurrences.
The master method for solving such recurrence equations involves simply writing down the answer based on which of three cases applies. Each case is distinguished by comparing f(n) to the special function n^(log_b a) (we will show later why this special function is so important).
Theorem 5.6 [The Master Theorem]: Let f(n) and T(n) be defined as above.

1. If there is a small constant ε > 0 such that f(n) is O(n^(log_b a − ε)), then T(n) is Θ(n^(log_b a)).
2. If there is a constant k ≥ 0 such that f(n) is Θ(n^(log_b a) log^k n), then T(n) is Θ(n^(log_b a) log^(k+1) n).
3. If there are small constants ε > 0 and δ < 1 such that f(n) is Ω(n^(log_b a + ε)) and a f(n/b) ≤ δ f(n) for n ≥ d, then T(n) is Θ(f(n)).
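The case analysis of Theorem 5.6 can be sketched as a small Python helper. This is a hypothetical illustration, not part of the text: it handles only driving functions of the polylogarithmic form f(n) = n^p log^q n, which is the form the three cases distinguish, and the function name and its output strings are our own choices.

```python
import math

def master_theorem(a, b, p, q=0):
    """Classify T(n) = a*T(n/b) + f(n) where f(n) = n^p * (log n)^q.

    A sketch of Theorem 5.6 for polylogarithmic f only; returns a
    string describing the Theta-class of T(n)."""
    crit = math.log(a, b)  # exponent of the special function n^(log_b a)
    if abs(p - crit) < 1e-12:
        # Case 2: f(n) is Theta(n^(log_b a) log^q n)
        return f"Theta(n^{crit:g} log^{q + 1} n)"
    if p < crit:
        # Case 1: f(n) is O(n^(log_b a - eps))
        return f"Theta(n^{crit:g})"
    # Case 3: f(n) is Omega(n^(log_b a + eps)); for f of this polynomial
    # form the regularity condition a*f(n/b) <= delta*f(n) holds as well
    return f"Theta(n^{p:g} log^{q} n)" if q else f"Theta(n^{p:g})"

print(master_theorem(4, 2, 1))     # T(n) = 4T(n/2) + n
print(master_theorem(2, 2, 1, 1))  # T(n) = 2T(n/2) + n log n
print(master_theorem(9, 3, 2.5))   # T(n) = 9T(n/3) + n^2.5
```

The examples that follow in the text can each be checked against this classifier.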
Case 1 characterizes the situation where f(n) is polynomially smaller than the special function, n^(log_b a). Case 2 characterizes the situation when f(n) is asymptotically close to the special function, and Case 3 characterizes the situation when f(n) is polynomially larger than the special function.

We illustrate the usage of the master method with a few examples (each taking the assumption that T(n) = c for n < d, for constants c ≥ 1 and d ≥ 1).
Example 5.7: Consider the recurrence
T(n) = 4T(n/2) + n.

In this case, n^(log_b a) = n^(log_2 4) = n^2. Thus, we are in Case 1, for f(n) is O(n^(2−ε)) for ε = 1. This means that T(n) is Θ(n^2) by the master method.
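Example 5.7's answer can be sanity-checked numerically. The base case T(1) = 1 below is our own choice (the Θ(n^2) growth rate is insensitive to it); the ratio T(n)/n^2 should settle to a constant as Case 1 predicts.

```python
# Example 5.7: T(n) = 4T(n/2) + n, with assumed base case T(1) = 1.
def T(n):
    return 1 if n < 2 else 4 * T(n // 2) + n

for j in [4, 8, 10]:
    n = 2 ** j
    print(n, T(n) / n ** 2)  # ratio approaches 2; in fact T(n) = 2n^2 - n
```

Unrolling the recurrence by hand gives T(n) = 2n^2 − n exactly for this base case, consistent with Θ(n^2).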
Example 5.8: Consider the recurrence

T(n) = 2T(n/2) + n log n,

which is one of the recurrences given above. In this case, n^(log_b a) = n^(log_2 2) = n.
Thus, we are in Case 2, with k = 1, for f(n) is Θ(n log n). This means that T(n) is Θ(n log^2 n) by the master method.
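Example 5.8 can also be verified against a hand-unrolled closed form. Taking logs base 2 and an assumed base case T(1) = 1 (both our choices), unrolling gives T(2^j) = n + n·j(j+1)/2 with j = log2 n, which is Θ(n log^2 n) as Case 2 predicts.

```python
import math

# Example 5.8: T(n) = 2T(n/2) + n log2(n), assumed base case T(1) = 1.
def T(n):
    return 1 if n < 2 else 2 * T(n // 2) + n * math.log2(n)

n, j = 2 ** 10, 10
print(T(n), n + n * j * (j + 1) / 2)  # both equal: T(2^j) = n + n*j*(j+1)/2
```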
Example 5.9: Consider the recurrence

T(n) = T(n/3) + n,

which is the recurrence for a geometrically decreasing summation that starts with n. In this case, n^(log_b a) = n^(log_3 1) = n^0 = 1. Thus, we are in Case 3, for f(n) is Ω(n^(0+ε)) for ε = 1, and a f(n/b) = n/3 = (1/3) f(n). This means that T(n) is Θ(n) by the master method.
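The geometric decrease in Example 5.9 is easy to observe directly. With an assumed base case T(1) = 1 (our choice), unrolling gives the geometric series n(1 + 1/3 + 1/9 + ...) plus a constant, which stays below (3/2)n + 1, so T(n) is within a constant factor of f(n) = n, as Case 3 says.

```python
# Example 5.9: T(n) = T(n/3) + n, assumed base case T(1) = 1.
def T(n):
    return 1 if n < 3 else T(n // 3) + n

for n in [3 ** 5, 3 ** 8, 3 ** 10]:
    print(n, T(n) / n)  # ratio stays below 1.5, approaching 3/2 from below
```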
Example 5.10: Consider the recurrence

T(n) = 9T(n/3) + n^2.5.

In this case, n^(log_b a) = n^(log_3 9) = n^2. Thus, we are in Case 3, since f(n) is Ω(n^(2+ε)) (for ε = 1/2) and a f(n/b) = 9(n/3)^2.5 = (1/3)^(1/2) f(n). This means that T(n) is Θ(n^2.5) by the master method.
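Example 5.10 also illustrates Case 3's regularity condition a f(n/b) ≤ δ f(n), which can be checked numerically. Here δ = (1/3)^(1/2) ≈ 0.577 < 1 for every n, as claimed in the text (the sample values of n below are our own).

```python
# Regularity check for Example 5.10: a = 9, b = 3, f(n) = n^2.5, so
# a*f(n/b)/f(n) = 9 / 3^2.5 = 3^(-1/2) ~ 0.577, a constant delta < 1.
a, b = 9, 3
f = lambda n: n ** 2.5

for n in [3.0, 81.0, 6561.0]:
    print(n, a * f(n / b) / f(n))  # always ~0.5774 = 3^(-1/2)
```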
Example 5.11: Finally, consider the recurrence
T(n) = 2T(n^(1/2)) + log n.

Unfortunately, this equation is not in a form that allows us to use the master method. We can put it into such a form, however, by introducing the variable k = log n, which lets us write

T(n) = T(2^k) = 2T(2^(k/2)) + k.

Substituting into this the equation S(k) = T(2^k), we get

S(k) = 2S(k/2) + k.

Now, this recurrence equation allows us to use the master method, which specifies that S(k) is O(k log k). Substituting back for T(n) implies that T(n) is O(log n log log n).
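The change of variable in Example 5.11 can be checked concretely: evaluating both recurrences on values n = 2^j where j is itself a power of 2 (so that n^(1/2) is again a power of 2), T(n) and S(log2 n) should agree. The base cases T(2) = S(1) = 1 below are our own assumption.

```python
import math

# S(k) = 2S(k/2) + k, with assumed base case S(1) = 1.
def S(k):
    return 1 if k <= 1 else 2 * S(k // 2) + k

# T(n) = 2T(n^(1/2)) + log2(n), with assumed base case T(2) = 1.
def T(n):
    return 1 if n <= 2 else 2 * T(int(round(math.sqrt(n)))) + int(math.log2(n))

for j in [2, 4, 8, 16]:
    n = 2 ** j
    print(n, T(n) == S(j))  # True for each: S(k) = T(2^k) as claimed
```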
Rather than rigorously prove Theorem 5.6, we instead discuss the justification behind the master method at a high level.

If we apply the iterative substitution method to the general divide-and-conquer recurrence equation, we get
recurrence equation, we get
T(n) = aT(n/b) + f(n)
     = a(aT(n/b^2) + f(n/b)) + f(n) = a^2 T(n/b^2) + a f(n/b) + f(n)
     = a^3 T(n/b^3) + a^2 f(n/b^2) + a f(n/b) + f(n)
     ...
     = a^(log_b n) T(1) + sum_{i=0}^{(log_b n)−1} a^i f(n/b^i)
     = n^(log_b a) T(1) + sum_{i=0}^{(log_b n)−1} a^i f(n/b^i),

where the last substitution is based on the identity a^(log_b n) = n^(log_b a). Indeed, this
equation is where the special function comes from. Given this closed-form characterization of T (n), we can intuitively see how each of the three cases is derived.
Case 1 comes from the situation when f (n) is small and the ﬁrst term above dominates. Case 2 denotes the situation when each of the terms in the above summation
is proportional to the others, so the characterization of T (n) is f (n) times a logarithmic factor. Finally, Case 3 denotes the situation when the ﬁrst term is smaller
than the second and the summation above is a sum of geometrically decreasing
terms that start with f (n); hence, T (n) is itself proportional to f (n).
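This unrolled closed form can be confirmed on a concrete instance. All of the choices below are ours for illustration: a = 4, b = 2, f(n) = n, base case T(1) = 1, and n a power of b so that the recursion divides evenly.

```python
import math

# Compare the recursive definition of T against the unrolled form
# n^(log_b a) * T(1) + sum_{i=0}^{(log_b n)-1} a^i * f(n / b^i).
a, b = 4, 2
f = lambda n: n

def T(n):
    return 1 if n < b else a * T(n // b) + f(n)

j = 10           # n = b^j, so log_b n = j exactly
n = b ** j
closed = n ** math.log(a, b) + sum(a ** i * f(n // b ** i) for i in range(j))
print(T(n), closed)  # equal, via the identity a^(log_b n) = n^(log_b a)
```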
The proof of Theorem 5.6 formalizes this intuition, but instead of giving the details of this proof, we present two applications of the master method below.

5.2.2 Integer Multiplication
We consider, in...