# Divide-and-Conquer Patterns




what a closed-form solution of the recurrence equation might look like and then justifying that guess, usually by induction. For example, we can use the guess-and-test method as a kind of "binary search" for finding good upper bounds on a given recurrence equation. If the justification of our current guess fails, then it is possible that we need to use a faster-growing function, and if our current guess is justified "too easily," then it is possible that we need to use a slower-growing function. However, using this technique requires that we be careful, in each mathematical step we take, to justify that a certain hypothesis holds with respect to our current "guess." We explore an application of the guess-and-test method in the examples that follow.

**Example 5.4:** Consider the following recurrence equation (assuming the base case $T(n) = b$ for $n < 2$):

$$T(n) = 2T(n/2) + bn \log n.$$

This looks very similar to the recurrence equation for the merge-sort routine, so we might make the following our first guess:

*First guess:* $T(n) \le cn \log n$, for some constant $c > 0$.

We can certainly choose $c$ large enough to make this true for the base case, so consider the case when $n \ge 2$. If we assume our first guess is an inductive hypothesis that is true for input sizes smaller than $n$, then we have

$$
\begin{aligned}
T(n) &= 2T(n/2) + bn \log n \\
&\le 2(c(n/2) \log(n/2)) + bn \log n \\
&= cn(\log n - \log 2) + bn \log n \\
&= cn \log n - cn + bn \log n.
\end{aligned}
$$

But there is no way that we can make this last line less than or equal to $cn \log n$ for $n \ge 2$. Thus, this first guess was not sufficient. Let us therefore try

*Better guess:* $T(n) \le cn \log^2 n$, for some constant $c > 0$.

We can again choose $c$ large enough to make this true for the base case, so consider the case when $n \ge 2$.
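As a quick numerical sanity check on the better guess (a sketch, not part of the original argument; the constant $c = 2$ and base-case value $b = 1$ are illustrative assumptions, and we restrict to powers of two so that $n/2$ stays an integer):

```python
from math import log2

def T(n, b=1):
    """Recurrence of Example 5.4: T(n) = b for n < 2,
    T(n) = 2*T(n/2) + b*n*log n otherwise (logs base 2)."""
    if n < 2:
        return b
    return 2 * T(n // 2, b) + b * n * log2(n)

# Empirically check the better guess T(n) <= c * n * log2(n)^2
# with the illustrative choices c = 2 and b = 1.
c = 2
for k in range(1, 17):
    n = 2 ** k
    assert T(n) <= c * n * log2(n) ** 2
```

Such a check cannot replace the induction, but it is a cheap way to catch a guess that is wrong before investing effort in the proof.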
If we assume this guess as an inductive hypothesis that is true for input sizes smaller than $n$, then we have

$$
\begin{aligned}
T(n) &= 2T(n/2) + bn \log n \\
&\le 2(c(n/2) \log^2(n/2)) + bn \log n \\
&= cn(\log n - 1)^2 + bn \log n \\
&= cn(\log^2 n - 2\log n + 1) + bn \log n \\
&= cn \log^2 n - 2cn \log n + cn + bn \log n \\
&\le cn \log^2 n,
\end{aligned}
$$

provided $c \ge b$. Thus, we have shown that $T(n)$ is indeed $O(n \log^2 n)$ in this case.

We must take care in using this method. Just because one inductive hypothesis for $T(n)$ does not work, that does not necessarily imply that another one proportional to it will not work.

**Example 5.5:** Consider the following recurrence equation (assuming the base case $T(n) = b$ for $n < 2$):

$$T(n) = 2T(n/2) + \log n.$$

This recurrence gives the running time for the bottom-up heap construction discussed in Section 2.4.4, which we have shown is $O(n)$. Nevertheless, if we try to prove this fact with the most straightforward inductive hypothesis, we will run into some difficulties. In particular, consider the following:

*First guess:* $T(n) \le cn$, for some constant $c > 0$.

We can certainly choose $c$ large enough to make this true for the base case, so consider the case when $n \ge 2$. If we assume this guess as an inductive hypothesis that is true for input sizes smaller than $n$, then we have

$$
\begin{aligned}
T(n) &= 2T(n/2) + \log n \\
&\le 2(c(n/2)) + \log n \\
&= cn + \log n.
\end{aligned}
$$

But there is no way that we can make this last line less than or equal to $cn$ for $n \ge 2$. Thus, this first guess was not sufficient, even though $T(n)$ is indeed $O(n)$. Still, we can show this fact is true by using

*Better guess:* $T(n) \le c(n - \log n)$, for some constant $c > 0$.

We can again choose $c$ large enough to make this true for the base case; in fact, we can show that it is true any time $n < 8$. So consider the case when $n \ge 8$.
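The better guess of Example 5.5 can likewise be checked numerically (again a sketch with illustrative assumptions: base-case value $b = 1$, powers of two only, and $c = 5$ rather than the minimum $c = 3$ from the inductive step, so that the same constant also covers the small base cases):

```python
from math import log2

def T(n, b=1):
    """Recurrence of Example 5.5: T(n) = b for n < 2,
    T(n) = 2*T(n/2) + log n otherwise (logs base 2)."""
    if n < 2:
        return b
    return 2 * T(n // 2, b) + log2(n)

# The inductive step needs c >= 3 (for n >= 8); with the illustrative
# base case b = 1, taking c = 5 also covers the cases n < 8.
c = 5
for k in range(1, 17):
    n = 2 ** k
    assert T(n) <= c * (n - log2(n))
```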
If we assume this guess as an inductive hypothesis that is true for input sizes smaller than $n$, then we have

$$
\begin{aligned}
T(n) &= 2T(n/2) + \log n \\
&\le 2c((n/2) - \log(n/2)) + \log n \\
&= cn - 2c(\log n - 1) + \log n \\
&= cn - 2c \log n + 2c + \log n \\
&= c(n - \log n) - c \log n + 2c + \log n \\
&\le c(n - \log n),
\end{aligned}
$$

provided $c \ge 3$ and $n \ge 8$. Thus, we have shown that $T(n)$ is indeed $O(n)$ in this case.

The guess-and-test method can be used to establish either an upper or lower bound for the asymptotic complexity of a recurrence equation. Even so, as the example above demonstrates, it requires that we have developed some skill with mathematical induction.

## The Master Method

Each of the methods described above for solving recurrence equations is ad hoc and requires mathematical sophistication to be used effectively. There is, nevertheless, one method for solving divide-and-conquer recurrence equations that is quite general and does not require explicit use of induction to apply correctly: the master method. The master method is a "cookbook" method for determining the asymptotic characterization of a wide variety of recurrence equations. Namely, it is used for recurrence equations of the form

$$
T(n) =
\begin{cases}
c & \text{if } n < d \\
aT(n/b) + f(n) & \text{if } n \ge d,
\end{cases}
$$

where $d \ge 1$ is an integer constant, $a > 0$, $c > 0$, and $b > 1$ are real constants, and $f(n)$ is a function that is positive for $n \ge d$. Such a recurrence equation would arise i...
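A recurrence of the master-method form can also be explored numerically before (or alongside) applying the method. The sketch below (not from the original text; the parameter choices $a = 2$, $b = 2$, $f(n) = n$, $c = 1$, $d = 2$ are illustrative, merge-sort-like assumptions) evaluates such a recurrence at powers of two, where unrolling it gives the exact value $T(n) = n \log n + n$:

```python
def T(n, a=2, b=2, c=1, d=2, f=lambda n: n):
    """Evaluate a recurrence of the master-method form:
    T(n) = c if n < d, else a*T(n/b) + f(n).
    The defaults a=2, b=2, f(n)=n, c=1, d=2 are illustrative."""
    if n < d:
        return c
    return a * T(n // b, a, b, c, d, f) + f(n)

# For a = 2, b = 2, f(n) = n, c = 1, and n = 2^k, unrolling the
# recurrence gives exactly T(n) = n*log2(n) + n = n*k + n.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k + n
```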

