COP 3503 – Computer Science II
CLASS NOTES

DAY #9
Upper Bounds for Divide and Conquer Algorithm Running Times
The analysis of the divide and conquer algorithm to solve the MCSS problem illustrated that a problem divided into two parts, each solved recursively, with an O(N) overhead (the linear part of case 3), results in an O(N log_2 N) running time. Our analysis was based upon the fact that the value of N we selected was a power of two, so the halving produced equal-sized subproblems at every level. If the value of N hadn't been a power of two, our analysis technique wouldn't have worked. In this section we'll see how to determine the running time of a general divide and conquer algorithm where N isn't necessarily a power of two.
Our analysis needs three parameters:
• A – the number of subproblems.
• B – the relative size of the subproblems (if B = 2 then the subproblems are half-sized, B = 3 implies 1/3-sized subproblems, and so on).
• k – a term representing the overhead, which is Θ(N^k).
In general, our timing equation is T(N) = A·T(N/B) + O(N^k), where A ≥ 1 and B > 1.
The solution to this equation is (proof on pgs. 205-207 in the textbook):

         O(N^(log_B A))    if A > B^k    – solution 1
T(N) =   O(N^k log_2 N)    if A = B^k    – solution 2
         O(N^k)            if A < B^k    – solution 3
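The three cases can be checked mechanically by comparing A against B^k. As an illustrative sketch (my own helper, not from the notes or the textbook):

```python
import math

def divide_and_conquer_bound(a, b, k):
    """Classify T(N) = a*T(N/b) + O(N^k), with a >= 1, b > 1,
    into one of the three solutions above."""
    if a > b ** k:
        # Solution 1: the recursive calls dominate.
        return f"O(N^{math.log(a, b):.2f})"
    elif a == b ** k:
        # Solution 2: calls and overhead balance; a log factor appears.
        return f"O(N^{k} log N)"
    else:
        # Solution 3: the overhead dominates.
        return f"O(N^{k})"
```

For example, `divide_and_conquer_bound(2, 2, 1)` reports the O(N log N) bound of the MCSS algorithm analyzed below.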
Example – MCSS Divide and Conquer Algorithm

In our divide and conquer algorithm to solve the MCSS problem we have the following values for the parameters in our timing equation:

A = 2   {since the problem was divided into two subproblems}
B = 2   {since the two subproblems were half-sized}
k = 1   {since we had linear overhead, so O(N^1)}

Solution 2 applies as the value of T(N) here, since A = B^k (2 = 2^1). Therefore the divide and conquer solution to the MCSS problem has a running time of:

T(N) = O(N^1 log_2 N) = O(N log_2 N)
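The notes don't include the algorithm's code; a sketch of the standard divide and conquer formulation (the name `mcss` is mine) shows where each parameter comes from:

```python
def mcss(arr, lo=0, hi=None):
    """Maximum contiguous subsequence sum, divide and conquer:
    two half-sized recursive calls plus O(N) crossing work,
    so T(N) = 2T(N/2) + O(N) = O(N log N)."""
    if hi is None:
        hi = len(arr) - 1
    if lo == hi:
        # Base case: one element (the empty subsequence has sum 0).
        return max(arr[lo], 0)
    mid = (lo + hi) // 2
    # Cases 1 and 2: best sum lies entirely in one half (A = 2, B = 2).
    left_best = mcss(arr, lo, mid)
    right_best = mcss(arr, mid + 1, hi)
    # Case 3 (the O(N) overhead, k = 1): best sum crossing the middle.
    left_border = running = 0
    for i in range(mid, lo - 1, -1):
        running += arr[i]
        left_border = max(left_border, running)
    right_border = running = 0
    for i in range(mid + 1, hi + 1):
        running += arr[i]
        right_border = max(right_border, running)
    return max(left_best, right_best, left_border + right_border)
```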
If the original MCSS problem were divided into three recursive subproblems, each of which were half-sized with linear overhead (the case 3 situation again), then we have A = 3, B = 2, and k = 1. For this situation, solution 1 applies since A > B^k (3 > 2^1). Thus,

T(N) = O(N^(log_B A)) = O(N^(log_2 3)) = O(N^1.59)
In this case, the overhead (the calculations required for case 3) does not contribute to the total cost of the algorithm, since O(N^1.59) > O(N^1). This means that any overhead smaller than O(N^1.59) would give the same running time for the algorithm!
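One way to see the O(N^(log_2 3)) growth concretely (my own check, not part of the notes) is to tally the exact work in T(N) = 3T(N/2) + N and observe that doubling N roughly triples the total, as N^1.59 growth predicts:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total_work(n):
    """Exact work for T(N) = 3*T(N/2) + N with T(1) = 1:
    three half-sized subproblems plus linear overhead."""
    if n == 1:
        return 1
    return 3 * total_work(n // 2) + n

# Doubling N roughly triples the work: 2^1.59 ≈ 3,
# so growth tracks N^(log_2 3), not the linear overhead.
ratio = total_work(2 ** 10) / total_work(2 ** 9)
```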
If the original MCSS problem were divided into three recursive subproblems, each of which were half-sized but required quadratic overhead (A = 3, B = 2, and k = 2), then solution 3 would apply since A < B^k (3 < 2^2). Thus, T(N) = O(N^k) = O(N^2).
In this case, the overhead (the calculations required for case 3) dominates the total cost of the algorithm, since O(N^2) > O(N^1.59). This means that once the overhead exceeds the O(N^1.59) threshold, the overhead becomes the dominating factor in the running time of the algorithm!
Dynamic Programming
Dynamic programming is a (typically) non-recursive way of solving the subproblems of a divide and conquer algorithm by storing subproblem results in a table.
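As a minimal sketch of the idea (the Fibonacci example and function name are mine, not from the notes): compute each subproblem once, store its result in a table, and look it up rather than recursing:

```python
def fib_table(n):
    """Dynamic programming: fill a table of subproblem results
    bottom-up, so fib(n) takes O(n) time instead of the
    exponential time of the naive recursion."""
    if n < 2:
        return n
    table = [0] * (n + 1)   # table[i] will hold fib(i)
    table[1] = 1
    for i in range(2, n + 1):
        # Each entry reuses two already-stored subproblem results.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```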