Outline
1 Divide and Conquer Strategy
2 Master Theorem
3 Matrix Multiplication
4 Strassen's MM Algorithm
5 Complexity of a Problem
6 Selection Problem
7 Summary
8 Computational Geometry

© Xin He (University at Buffalo) CSE 431/531 Algorithm Analysis and Design 1 / 55

Divide and Conquer Strategy

Algorithm design is more an art than a science.
There are a few useful strategies, but none is guaranteed to succeed.
We will discuss: Divide and Conquer, Greedy, Dynamic Programming.
For each of them, we will discuss a few examples and try to identify common schemes.

Divide and Conquer:
Divide the problem into smaller subproblems (of the same type).
Solve each subproblem (usually by recursive calls).
Combine the solutions of the subproblems into a solution of the original problem.

Merge Sort
MergeSort
Input: an array A[1..n]
Output: A sorted into increasing order.

We use a recursive function MergeSort(A, p, r); it sorts A[p..r].
In the main program, we call MergeSort(A, 1, n).

Merge Sort
MergeSort(A, p, r)
  if p < r then
    q = ⌊(p + r)/2⌋
    MergeSort(A, p, q)
    MergeSort(A, q + 1, r)
    Merge(A, p, q, r)
  else
    do nothing
  end if

Divide A[p..r] into two sub-arrays of (nearly) equal size.
Sort each sub-array by a recursive call.
Merge(A, p, q, r) is a procedure that, assuming A[p..q] and A[q + 1..r] are sorted, merges them into sorted A[p..r].
It can be done in Θ(k) time, where k = r − p + 1 is the number of elements to be merged.

Analysis of MergeSort

Let T(n) be the runtime function of MergeSort on A[1..n]. Then:
T(n) = O(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1

If n = 1, MergeSort does nothing, hence O(1) time.
Otherwise, we make 2 recursive calls, each on input of size n/2; hence the 2T(n/2) term.
Θ(n) is the time needed by Merge(A, p, q, r) and all other processing.
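The MergeSort pseudocode above can be sketched in Python. This is a minimal illustration (0-based indices; the helper names merge_sort and merge are mine), not code from the slides:

```python
def merge_sort(a, p, r):
    """Sort a[p..r] in place, following the slides' recursion."""
    if p < r:
        q = (p + r) // 2          # split point
        merge_sort(a, p, q)       # sort left half:  T(n/2)
        merge_sort(a, q + 1, r)   # sort right half: T(n/2)
        merge(a, p, q, r)         # combine: Theta(n)

def merge(a, p, q, r):
    """Merge sorted a[p..q] and a[q+1..r] into sorted a[p..r]."""
    left, right = a[p:q + 1], a[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        # take from left while right is exhausted or left's head is smaller
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1

data = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(data, 0, len(data) - 1)
# data is now [1, 2, 2, 3, 4, 5, 6, 7]
```

Each level of the recursion does Θ(n) total merging work, matching the recurrence above.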
Outline

1 Divide and Conquer Strategy
2 Master Theorem
3 Matrix Multiplication
4 Strassen's MM Algorithm
5 Complexity of a Problem
6 Selection Problem
7 Summary
8 Computational Geometry

Master Theorem
For DaC algorithms, the runtime function often satisfies:

T(n) = O(1)                 if n ≤ n0
T(n) = aT(n/b) + Θ(f(n))    if n > n0

If n ≤ n0 (n0 is a small constant), we solve the problem directly without recursive calls. Since the input size is bounded by n0, this takes O(1) time.
We make a recursive calls, each on input of size n/b; each call costs T(n/b), for aT(n/b) in total.
Θ(f(n)) is the time needed by all other processing.
T(n) = ?

Master Theorem (Theorem 4.1, Cormen's book)
1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and a·f(n/b) ≤ c·f(n) for some c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Example: MergeSort
We have a = 2, b = 2, hence log_b a = log_2 2 = 1. So f(n) = Θ(n^1) = Θ(n^(log_b a)).
By case (2), T(n) = Θ(n^(log_2 2) · log n) = Θ(n log n).
Binary Search

Input: a sorted array A[1..n] and a number x
Output: an index i such that A[i] = x; if no such i exists, output "no".

We use a recursive function BinarySearch(A, p, r, x) that searches for x in A[p..r].

BinarySearch(A, p, r, x)
  if p > r then return "no"
  q = ⌊(p + r)/2⌋
  if A[q] = x then return q
  if A[q] > x then return BinarySearch(A, p, q − 1, x)
  if A[q] < x then return BinarySearch(A, q + 1, r, x)

Analysis of Binary Search

If n = r − p + 1 = 1, it takes O(1) time.
If not, we make at most one recursive call, with input size n/2.
All other processing takes f(n) = Θ(1) time.
So a = 1, b = 2, and f(n) = Θ(n^0).
Since log_b a = log_2 1 = 0, f(n) = Θ(n^(log_b a)).
Hence by case (2), T(n) = Θ(n^(log_b a) · log n) = Θ(log n).
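As a sketch, the BinarySearch recursion above maps to Python like this (0-based indices; None plays the role of the "no" output, and an empty range serves as the base case):

```python
def binary_search(a, p, r, x):
    """Search for x in sorted a[p..r]; return its index, or None ("no")."""
    if p > r:                                 # empty range: x is not present
        return None
    q = (p + r) // 2
    if a[q] == x:
        return q
    if a[q] > x:                              # x can only be in the left half
        return binary_search(a, p, q - 1, x)
    return binary_search(a, q + 1, r, x)      # x can only be in the right half

a = [1, 3, 4, 7, 9, 11]
print(binary_search(a, 0, len(a) - 1, 7))   # -> 3
print(binary_search(a, 0, len(a) - 1, 8))   # -> None
```

Each call halves the range and does Θ(1) other work, giving the Θ(log n) bound derived above.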
Example

A function makes 4 recursive calls, each on input of size n/2. Other processing takes f(n) = Θ(n^3) time.

T(n) = 4T(n/2) + Θ(n^3)

We have a = 4, b = 2. So log_b a = log_2 4 = 2.
f(n) = n^3 = Θ(n^(log_b a + 1)) = Ω(n^(log_b a + 0.5)).
This is case 3 of the Master Theorem. We need to check the second (regularity) condition:

a · f(n/b) = 4 · (n/2)^3 = (4/8) · n^3 = (1/2) · f(n)

If we let c = 1/2 < 1, we have a · f(n/b) ≤ c · f(n).
Hence by case 3, T(n) = Θ(f(n)) = Θ(n^3).
Master Theorem

If f(n) has the form f(n) = Θ(n^k) for some k ≥ 0, we have the following:

A simpler version of the Master Theorem

T(n) = O(1)               if n ≤ n0
T(n) = aT(n/b) + Θ(n^k)   if n > n0

1. If k < log_b a, then T(n) = Θ(n^(log_b a)).
2. If k = log_b a, then T(n) = Θ(n^k log n).
3. If k > log_b a, then T(n) = Θ(n^k).

Only case 3 is different. In this case, we need to check the second condition. Because k > log_b a, we have b^k > a and a/b^k < 1:

a · f(n/b) = a · (n/b)^k = (a/b^k) · n^k = c · f(n)

where c = a/b^k < 1, as needed.
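The three cases of the simpler version can be turned into a small calculator. A sketch (the function name and output strings are mine; comparing k against log_b a with math.isclose is an implementation convenience, since the critical exponent is a float):

```python
import math

def master_theorem(a, b, k):
    """Solve T(n) = a*T(n/b) + Theta(n^k) asymptotically
    by the simpler version of the Master Theorem (a >= 1, b > 1, k >= 0)."""
    crit = math.log(a, b)                  # critical exponent log_b a
    if math.isclose(k, crit):
        return f"Theta(n^{k:g} log n)"     # case 2: k = log_b a
    if k < crit:
        return f"Theta(n^{crit:g})"        # case 1: recursive calls dominate
    return f"Theta(n^{k:g})"               # case 3: other processing dominates

print(master_theorem(2, 2, 1))   # MergeSort: Theta(n^1 log n)
print(master_theorem(1, 2, 0))   # BinarySearch: Theta(n^0 log n) = Theta(log n)
print(master_theorem(4, 2, 3))   # earlier example: Theta(n^3)
```

This mirrors the intuition developed next: whichever part dominates sets the bound, with an extra log n in the balanced case.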
Master Theorem

How to understand/memorize the Master Theorem?

The cost of a DaC algorithm can be divided into two parts:
1. The total cost of all recursive calls: Θ(n^(log_b a)).
2. The total cost of all other processing: Θ(f(n)).

If (1) > (2), (1) dominates the total cost: T(n) = Θ(n^(log_b a)).
If (1) < (2), (2) dominates the total cost: T(n) = Θ(f(n)).
If (1) = (2), the costs of the two parts are about the same, and we pick up an extra factor of log n.

The proof of the Master Theorem is given in the textbook.
We'll illustrate two examples in class.
Example

For some simple cases, the Master Theorem does not work.

T(n) = 2T(n/2) + Θ(n log n)

a = 2, b = 2, log_b a = log_2 2 = 1. So f(n) = n^1 · log n = n^(log_b a) · log n.
f(n) = Ω(n), but f(n) ≠ Ω(n^(1+ε)) for any ε > 0.
The Master Theorem does not apply.

Theorem
If T(n) = aT(n/b) + f(n), where f(n) = Θ(n^(log_b a) · (log n)^k), then T(n) = Θ(n^(log_b a) · (log n)^(k+1)).

In the above example, T(n) = Θ(n log^2 n).
Outline

1 Divide and Conquer Strategy
2 Master Theorem
3 Matrix Multiplication
4 Strassen's MM Algorithm
5 Complexity of a Problem
6 Selection Problem
7 Summary
8 Computational Geometry

Matrix Multiplication
Matrix multiplication is a basic operation in Linear Algebra, with many applications in science and engineering.
Let A = (a_ij) and B = (b_ij), 1 ≤ i, j ≤ n, be two n × n matrices.

Definition (Matrix Addition)
C = (c_ij) = A + B is defined by c_ij = a_ij + b_ij for 1 ≤ i, j ≤ n.

We need to calculate n^2 entries in C.
Each entry takes O(1) time.
So matrix addition takes Θ(n^2) time.
Matrix Multiplication

Definition (Matrix Multiplication)
C = (c_ij) = A × B is defined by: for 1 ≤ i, j ≤ n,

c_ij = Σ_{k=1..n} a_ik × b_kj

Example

A = | 4 −1 |    B = | 3  1 |
    | 2  1 |        | 0 −2 |

C = | 4·3 + (−1)·0   4·1 + (−1)·(−2) |   =   | 12  6 |
    | 2·3 + 1·0      2·1 + 1·(−2)    |       |  6  0 |
Matrix Multiplication

MatrixMultiply(A, B)
  for i = 1 to n do
    for j = 1 to n do
      c_ij = 0
      for k = 1 to n do
        c_ij = c_ij + a_ik · b_kj
      end for
    end for
  end for

This algorithm clearly takes Θ(n^3) time.
Since MM is an important operation, can we do better than this?
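The triple loop translates directly into code. A minimal sketch (0-based indices, plain lists of lists; the function name is mine):

```python
def matrix_multiply(A, B):
    """Return C = A x B for two n x n matrices, straight from the
    definition c_ij = sum_k a_ik * b_kj: three nested loops, Theta(n^3)."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# the 2 x 2 example from the slides
A = [[4, -1], [2, 1]]
B = [[3, 1], [0, -2]]
print(matrix_multiply(A, B))   # -> [[12, 6], [6, 0]]
```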
Outline

1 Divide and Conquer Strategy
2 Master Theorem
3 Matrix Multiplication
4 Strassen's MM Algorithm
5 Complexity of a Problem
6 Selection Problem
7 Summary
8 Computational Geometry

Strassen's MM Algorithm

Try DaC. Assume n = 2^k is a power of 2. If not, we can pad A and B with extra 0's so that this is true.
Divide each of A, B, C into 4 sub-matrices of size n/2 × n/2:

A = | a b |    B = | e f |    C = | r s |
    | c d |        | g h |        | t u |

It can be shown:

r = a×e + b×g    s = a×f + b×h
t = c×e + d×g    u = c×f + d×h
If n = 1, solve the problem directly (in O(1) time).
If not, divide A and B into 4 sub-matrices. (This only involves manipulation of indices; no actual division is needed, so it takes no time.)
Solve the sub-problems using recursive calls. Each × is a recursive call; there are 8 of them.
Use the above formulas to obtain A × B. This involves 4 matrix additions of size n/2 × n/2, which takes Θ(n^2) time.

T(n) = 8T(n/2) + Θ(n^2)

Thus a = 8, b = 2, and k = 2. Since log_b a = log_2 8 = 3 > 2, we get T(n) = Θ(n^(log_2 8)) = Θ(n^3).
No better than the simple Θ(n^3) algorithm.

Strassen's MM Algorithm
The problem: we are making too many recursive calls!
To improve, we must reduce the number of recursive calls.

A1 = a        B1 = f − h    P1 = A1 × B1
A2 = a + b    B2 = h        P2 = A2 × B2
A3 = c + d    B3 = e        P3 = A3 × B3
A4 = d        B4 = g − e    P4 = A4 × B4
A5 = a + d    B5 = e + h    P5 = A5 × B5
A6 = b − d    B6 = g + h    P6 = A6 × B6
A7 = a − c    B7 = e + f    P7 = A7 × B7

r = P5 + P4 − P2 + P6
s = P1 + P2
t = P3 + P4
u = P5 + P1 − P3 − P7

We need 7 recursive calls, and a total of 18 additions/subtractions of n/2 × n/2 sub-matrices.
T(n) = 7T(n/2) + Θ(n^2)
a = 7, b = 2, k = 2. So log_b a = log_2 7 ≈ 2.81 > k.
Hence T(n) = Θ(n^{log_2 7}) = Θ(n^2.81).
For small n, the simple Θ(n^3) algorithm is better.
For larger n, Strassen’s Θ(n^2.81) algorithm is better.
The break-even value is 20 ≤ n ≤ 50, depending on the implementation.
In some Science/Engineering applications, the matrices in MM are sparse (namely, most entries are 0). In such cases, neither the simple algorithm nor Strassen’s works well. Completely different algorithms have been designed.
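The break-even behavior can be estimated by unrolling the two recurrences and counting scalar operations (a rough model, ignoring memory effects: the 18 block additions per level as above, and 2n^3 − n^2 scalar operations for the simple algorithm):

```python
from functools import lru_cache

def classical_ops(n):
    """Scalar ops of the simple algorithm: n^3 mults + n^2(n-1) adds."""
    return 2 * n**3 - n**2

@lru_cache(maxsize=None)
def strassen_ops(n):
    """T(n) = 7 T(n/2) + 18 (n/2)^2, T(1) = 1, for n a power of 2."""
    if n == 1:
        return 1
    return 7 * strassen_ops(n // 2) + 18 * (n // 2) ** 2
```

Under this pure count (recursing all the way down to 1 × 1), Strassen only wins for quite large n; switching to the classical algorithm on small blocks is what moves the practical break-even down to roughly n = 20..50.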
Outline
1 Divide and Conquer Strategy
2 Master Theorem
3 Matrix Multiplication
4 Strassen’s MM Algorithm
5 Complexity of a Problem
6 Selection Problem
7 Summary
8 Computational Geometry

Complexity of a Problem
Complexity of a Problem
The Complexity of an algorithm is the growth rate of its runtime function.
The Complexity of a problem P is the complexity of the best algorithm (known or unknown) for solving it.
The complexity C_P(n) of P is the most important computational property of P.
If we have an algorithm for solving P with runtime T(n), then T(n) is an upper bound of C_P(n).
To determine C_P(n), we need to find a lower bound S_P(n): any algorithm (known or unknown) for solving P must have runtime at least Ω(S_P(n)).
If T(n) = Θ(S_P(n)), then C_P(n) = Θ(T(n)).
In most cases, this is extremely hard to do. (How do we determine the runtime functions of the infinitely many possible algorithms for solving P?) Or, in a few cases, it’s trivial.
Complexity of a Problem
Example
Matrix Addition (MA)
We have a simple algorithm for MA with runtime Θ(n^2). So O(n^2) is an upper bound for C_MA(n).
Θ(n^2) is also a lower bound for C_MA(n): any algorithm for solving MA must at least write down the resulting matrix C, and doing this alone requires Ω(n^2) time.
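The Θ(n^2) upper bound is just the entry-by-entry algorithm; a minimal sketch (not from the slides):

```python
def matrix_add(A, B):
    """C[i][j] = A[i][j] + B[i][j]; touches each of the n^2 entries once."""
    n = len(A)
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]
```

Writing the n^2 entries of C is unavoidable, which is exactly the Ω(n^2) lower-bound argument.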
Since the lower and upper bounds are the same, we get C_MA(n) = Θ(n^2).

Complexity of a Problem
Sorting (general purpose)
Given an array A[1..n] of elements, sort A. (The only operation allowed on A: comparison between array elements.)
MergeSort gives an upper bound: C_sort(n) = O(n log n).
The Lower Bound Theorem: any comparison-based sorting algorithm must make at least Ω(n log n) comparisons, and hence take at least Ω(n log n) time.
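The lower bound comes from a decision-tree argument: a comparison sort must distinguish all n! input orderings, so it needs at least log_2(n!) = Θ(n log n) comparisons. A quick numeric illustration (not slide material):

```python
from math import factorial, log2

def min_comparisons(n):
    """Information-theoretic lower bound: log2(n!) comparisons."""
    return log2(factorial(n))

# log2(n!) is sandwiched between (n/2) log2(n/2) and n log2(n),
# so it grows as Theta(n log n).
```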
Since the lower and upper bounds are the same, C_sort(n) = Θ(n log n).

Complexity of a Problem
Example
Matrix Multiplication (MM)
Strassen’s algorithm gives an upper bound: C_MM(n) = O(n^2.81).
Currently, the best known upper bound for MM is C_MM(n) = O(n^2.376).
A trivial lower bound: any MM algorithm must write down the resulting matrix C; this alone requires at least Ω(n^2) time. Thus C_MM(n) = Ω(n^2).
If C_MM(n) = Θ(n^α), we know 2 ≤ α ≤ 2.376.
Determining the exact value of α is a long-standing open problem in CS/Math.

Outline
1 Divide and Conquer Strategy
2 Master Theorem
3 Matrix Multiplication
4 Strassen’s MM Algorithm
5 Complexity of a Problem
6 Selection Problem
7 Summary
8 Computational Geometry

Selection Problem
Three basic problems for ordered sets:
Sorting.
Searching: given an array A[1..n] and x, find i such that A[i] = x. If no such i exists, report “no”.
Selection: given an unsorted array A[1..n] and an integer k (1 ≤ k ≤ n), return the kth smallest element in A.

Examples
Find the maximum element: Select(A[1..n], n).
Find the minimum element: Select(A[1..n], 1).
Find the median: Select(A[1..n], n/2).

Selection Problem
What’s the complexity of these three basic problems?
For Sorting, we already know C_sort(n) = Θ(n log n).
For Searching, there are two versions:
A[1..n] is not sorted:
The simple Linear Search takes O(n) time.
A trivial lower bound: we must look at every element of A at least once (if not, we might miss x). So C_unsorted-search(n) = Ω(n).
Since the lower and upper bounds match, we have C_unsorted-search(n) = Θ(n).
A[1..n] is sorted:
The simple Binary Search takes O(log n) time.
It can be shown Ω(log n) is also a lower bound.
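The O(log n) upper bound is realized by the familiar halving loop; a sketch in Python (0-indexed):

```python
def binary_search(A, x):
    """Return an index i with A[i] == x in sorted A, or None ("no")."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # halve the search range each step
        if A[mid] == x:
            return mid
        if A[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return None
```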
Since the lower and upper bounds match, we have C_sorted-search(n) = Θ(log n).

Selection Problem
What about Selection?

Simple-Select(A[1..n], k)
1: MergeSort(A[1..n])
2: output A[k]

This algorithm solves the selection problem in Θ(n log n) time.
But this is overkill: to solve the selection problem, we don’t have to sort.
We will present a Linear Time Select algorithm.
Ω(n) is a trivial lower bound for Select: we must look at each array element at least once; otherwise the answer could be wrong.
This would give: C_Select(n) = Θ(n).

Selection Problem
This algorithm uses DaC. We need a function partition(A, p, r).
The goal: rearrange A[p..r] so that for some q (p ≤ q ≤ r),
A[i] ≤ A[q] for all i = p, . . . , q − 1
A[q] ≤ A[j] for all j = q + 1, . . . , r
That is: A[p..q−1] ≤ A[q] ≤ A[q+1..r].

Partition
The following code partitions A[p..r] around A[r].
Partition(A, p, r)
1: x ← A[r] (x is the “pivot”.)
2: i ← p − 1
3: for j ← p to r − 1 do
4:     if A[j] ≤ x then
5:         i ← i + 1
6:         swap A[i] and A[j]
7:     end if
8: end for
9: swap A[i + 1] and A[r]
10: return i + 1

Example: (x = 4 is the pivot element.)
Trace of Partition on the sub-array [3 1 8 5 6 2 7 4] (pivot x = A[r] = 4):

start (i = p − 1, j = p):        3 1 8 5 6 2 7 4
j at 3 (3 ≤ 4, i advances):      3 1 8 5 6 2 7 4
j at 1 (1 ≤ 4, i advances):      3 1 8 5 6 2 7 4
j at 8, 5, 6 (each > 4):         3 1 8 5 6 2 7 4
j at 2 (2 ≤ 4, swap with 8):     3 1 2 5 6 8 7 4
j at 7 (7 > 4):                  3 1 2 5 6 8 7 4
final swap A[i + 1] and A[r]:    3 1 2 4 6 8 7 5

Partition returns q = the position of the pivot 4; everything in A[p..q−1] = 3, 1, 2 is ≤ 4, and everything in A[q+1..r] = 6, 8, 7, 5 is ≥ 4.

Partition
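Partition transcribes directly into runnable Python (0-indexed; a sketch, not the course’s official code), reproducing the trace above:

```python
def partition(A, p, r):
    """Rearrange A[p..r] (inclusive) around pivot x = A[r]; return pivot index q."""
    x = A[r]                           # the pivot
    i = p - 1
    for j in range(p, r):              # j = p, ..., r-1
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]    # put the pivot between the two regions
    return i + 1
```

On A = [3, 1, 8, 5, 6, 2, 7, 4] it returns q = 3 and leaves A = [3, 1, 2, 4, 6, 8, 7, 5], matching the slide (0-indexed).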
We show Partition(A, p, r) achieves the goal. Before each iteration of the loop (lines 3–8), the following is true for any index k:
1 If k ∈ [p, i], then A[k] ≤ x.
2 If k ∈ [i + 1, j − 1], then A[k] > x.
3 If k = r, then A[k] = x.
4 If k ∈ [j, r − 1], then A[k] is unrestricted.

That is: A[p..i] ≤ x, A[i+1..j−1] > x, A[j..r−1] is unrestricted, and A[r] = x.

Before the 1st iteration, i = p − 1, j = p:
[p, i] = [p, p − 1] = ∅, so condition (1) is trivially true.
[i + 1, j − 1] = [p, p − 1] = ∅, so condition (2) is trivially true.
Conditions (3) and (4) are trivially true.

Partition
Maintenance of the invariant, for each iteration of the loop:

Case (a) A[j] > x: nothing is swapped; only j is incremented. The element A[j] > x simply joins the region A[i+1..j], so all four conditions still hold.

Case (b) A[j] ≤ x: i is incremented and A[i] and A[j] are swapped: the element a = A[i] > x and the element b = A[j] ≤ x trade places. Then j is incremented. Now b is in the region A[p..i] of elements ≤ x, and a is in the region A[i+1..j−1] of elements > x, so all four conditions still hold.

Partition
When the loop terminates, j = r:
A[p..i] ≤ x, A[i+1..r−1] > x (with a = A[i+1] > x), and A[r] = x.

After the final swap of A[i + 1] and A[r]:
A[p..i] ≤ x, A[i+1] = x, A[i+2..r−1] > x, and A[r] = a > x.

This is what we want.
It is easy to see Partition(A, p, r) takes Θ(n) time, where n = r − p + 1 is the number of elements in A[p..r].

Select
The following algorithm returns the ith smallest element in A[p..r]. (It requires
1 ≤ i ≤ r − p + 1).
Select(A, p, r, i)
1: if (p = r), return A[p] (in this case we must have i = r − p + 1 = 1).
2: x = A[r]
3: swap(A[r], A[r])
4: q = Partition(A, p, r)
5: k = q − p + 1
6: if i = k, return A[q]
7: if i < k, return Select(A, p, q − 1, i)
8: if i > k, return Select(A, q + 1, r, i − k)
Note: lines (2) and (3) don’t do anything here. We will modify it later.
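The Select pseudocode transcribes to Python as below (0-indexed; a sketch, with Partition repeated so the block is self-contained, and the no-op lines (2)–(3) folded into Partition’s choice of x = A[r]):

```python
def partition(A, p, r):
    """Rearrange A[p..r] around pivot x = A[r]; return the pivot's index."""
    x = A[r]
    m = p - 1
    for j in range(p, r):
        if A[j] <= x:
            m += 1
            A[m], A[j] = A[j], A[m]
    A[m + 1], A[r] = A[r], A[m + 1]
    return m + 1

def select(A, p, r, i):
    """Return the i-th smallest (1-indexed) element of A[p..r]."""
    if p == r:
        return A[p]                       # here i must be 1
    q = partition(A, p, r)
    k = q - p + 1                         # rank of the pivot within A[p..r]
    if i == k:
        return A[q]
    if i < k:
        return select(A, p, q - 1, i)     # answer lies left of the pivot
    return select(A, q + 1, r, i - k)     # answer lies right of the pivot
```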
After Partition: A[p..q−1] (q − p elements, all ≤ A[q]) | A[q] | A[q+1..r] (n − k elements, all ≥ A[q]).

Select
If we pick any element x ∈ A[p..r] as the pivot, the algorithm will work correctly.
It can be shown the expected (average) runtime is Θ(n).
However, in the worst case, the runtime is Θ(n^2). Worst case example: A[1..n] is already sorted and we try to find the smallest element.
Select(A[1..n], 1) calls Partition(A[1..n]), which returns q = n.
Select(A[1..n − 1], 1) calls Partition(A[1..n − 1]), which returns q = n − 1.
Select(A[1..n − 2], 1) calls Partition(A[1..n − 2]), which returns q = n − 2.
...
The runtime will be Θ(n + (n − 1) + (n − 2) + · · · + 2 + 1) = Θ(n^2).
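The quadratic blow-up can be observed directly by counting comparisons on a sorted input (an instrumented sketch, not from the slides):

```python
def select_counting(A, p, r, i, count):
    """Quickselect with pivot A[r]; count[0] accumulates comparisons."""
    if p == r:
        return A[p]
    x = A[r]
    m = p - 1
    for j in range(p, r):
        count[0] += 1                  # one comparison A[j] <= x per element
        if A[j] <= x:
            m += 1
            A[m], A[j] = A[j], A[m]
    A[m + 1], A[r] = A[r], A[m + 1]
    q = m + 1
    k = q - p + 1
    if i == k:
        return A[q]
    if i < k:
        return select_counting(A, p, q - 1, i, count)
    return select_counting(A, q + 1, r, i - k, count)

# On sorted input, selecting the minimum costs
# (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons.
```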
Select
The problem: the final position q of the pivot element x = A[r] can be anywhere; we have no control over this.
If q is close to the beginning or the end of A[p..r], it will be slow.
If we can pick x so that q is at about the middle of A[p..r], then the two sub-problems are about equal size, and the runtime will be much better.
How to do this?

Select
Replace the line (2) by the following:
Line 2
n
1: divide A[1..n] into 5 groups, each containing 5 elements (except
the last group which may have < 5 elements).
2: For each group Gi , let xi be the median of the group.
3: Let M = x1 , x2 . . . , x n/5 be the collection of these median
elements.
4: recursively call x = Select(M [1.. n/5 ], n/10). (Namely, x is the
median of M ).
5: use x as the pivot element.
We will show in class this modiﬁcation will give a Θ(n) time selection
algorithm. c Xin He (University at Buffalo) CSE 431/531 Algorithm Analysis and Design 42 / 55 Select The linear time selection algorithm is complex. The constant hidden in
Select
The linear-time selection algorithm is complex. The constant hidden in
Θ(n) is large, so it is not a practical algorithm. The significance:
It settled the complexity of a fundamental problem:
C_selection = Θ(n).
It illustrates two important algorithmic ideas:
Random Sampling: randomly pick the pivot x to partition the array. On
average, the algorithm takes Θ(n) time.
Derandomization: make a clever choice of x to remove the
randomness.
These ideas are used in other algorithms.
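The median-of-medians pivot rule can be sketched as a standalone function (hypothetical Python; the name mom_select and the list-based partition are illustrative, not the course's pseudocode):

```python
def mom_select(a, k):
    """Return the k-th smallest (1-indexed) element of list a,
    choosing the pivot by the median-of-medians rule."""
    if len(a) <= 5:
        return sorted(a)[k - 1]
    # Divide into groups of 5 and take each group's median.
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    # x = median of the medians, found recursively.
    x = mom_select(medians, (len(medians) + 1) // 2)
    # Partition around x.
    lo = [v for v in a if v < x]
    hi = [v for v in a if v > x]
    eq = len(a) - len(lo) - len(hi)   # copies of x itself
    if k <= len(lo):
        return mom_select(lo, k)
    if k <= len(lo) + eq:
        return x
    return mom_select(hi, k - len(lo) - eq)
```

The pivot x is guaranteed to be greater than (and less than) roughly 3n/10 of the elements, which is what bounds the recursive sub-problem by 7n/10 and yields the Θ(n) bound shown in class.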
Summary on Using DaC Strategy
When dividing into sub-problems, the size of each sub-problem should be
n/b for some constant b > 1. If it is only n − c for some constant c, and
there are at least two sub-problems, this usually leads to exponential time.

Example:
Fib(n)
1: if n = 0 return 0
2: if n = 1 return 1
3: else return Fib(n − 1) + Fib(n − 2)

Summary on DaC Strategy
We have:
T(n) = O(1)                           if n ≤ 1
T(n) = T(n − 1) + T(n − 2) + O(1)     if n ≥ 2

Thus:
T(n) ≥ T(n − 2) + T(n − 2) = 2T(n − 2) ≥ 2² T(n − 2·2) ≥ · · · ≥ 2^k T(n − 2k).

When k = n/2, we have: T(n) ≥ 2^(n/2) T(0) = Ω((√2)^n) = Ω(1.414^n).

Actually, T(n) = Θ(α^n) where α = (√5 + 1)/2 ≈ 1.618.

We make two recursive calls, with sizes n − 1 and n − 2. This leads to
exponential time.
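The exponential blow-up can be checked directly by counting calls (a hypothetical Python sketch):

```python
def fib_calls(n):
    """Return (Fib(n), number of calls made by the naive recursive algorithm)."""
    if n <= 1:
        return n, 1
    f1, c1 = fib_calls(n - 1)
    f2, c2 = fib_calls(n - 2)
    return f1 + f2, c1 + c2 + 1
```

The call count C(n) satisfies the same recurrence C(n) = C(n − 1) + C(n − 2) + 1, so the ratio C(n)/C(n − 1) approaches α ≈ 1.618, matching the Θ(α^n) bound above.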
Summary on DaC Strategy
When dividing into sub-problems, try to make them about equal in size.
In the linear-time Select algorithm, we took great effort to ensure
the size of the sub-problem is ≤ 7n/10.
After we get T(n) = aT(n/b) + Θ(n^k), how can we improve?
If log_b a < k, the cost of the other processing dominates the
runtime. We must reduce it.
If log_b a > k, the cost of the recursive calls dominates the
runtime. We must reduce the number of recursive calls
(Strassen's algorithm).
If log_b a = k, the costs of the two parts are about the same. To
improve, we must reduce both. Quite often, when you reach this
point, you have the best algorithm!
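The three cases can be wrapped in a small helper (hypothetical Python; dominant_cost is an illustrative name, not standard terminology):

```python
import math

def dominant_cost(a, b, k):
    """Classify T(n) = a*T(n/b) + Θ(n^k): which part of the work dominates?"""
    e = math.log(a, b)            # log_b a, the recursion-tree exponent
    if e < k:
        return f"Θ(n^{k}): non-recursive work dominates"
    if e > k:
        return f"Θ(n^{e:.3f}): recursive calls dominate"
    return f"Θ(n^{k} log n): both parts contribute equally"

# Strassen: a = 7, b = 2, k = 2 -> recursion dominates, Θ(n^2.807).
```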
Computational Geometry
The branch of CS that studies geometric problems. It has applications
in Computer Graphics, Robotics, Motion Planning, ...

Motion Planning
Given a set of polygons in the 2D plane and two points a and b, find the
shortest path from a to b that avoids all polygons.

[Figure: two points a and b with polygonal obstacles between them.]
Hidden Surface Removal
Given a set of polygons in 3D space and a viewpoint p, identify the
portions of the polygons that can be seen from p.

[Figure: a set of polygons and a viewpoint.]

Application: Computer Graphics.
Closest Point Pair Problem
Input: A set P = {p1, p2, . . . , pn} of n points, where pi = (xi, yi).
Find: i ≠ j such that dist(pi, pj) = [(xi − xj)² + (yi − yj)²]^(1/2) is the smallest
among all point pairs.

This is a basic problem in Computational Geometry.
Simple algorithm:
For each pair i ≠ j, compute dist(pi, pj).
Pick the pair with the smallest distance.
Let f(n) be the time needed to evaluate dist(·).
Since there are Θ(n²) point pairs, this algorithm takes Θ(n² f(n)) time.
By using DaC, we get a Θ(n log n · f(n)) time algorithm.
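The Θ(n²) brute force is a two-liner (hypothetical Python sketch):

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

def closest_pair_naive(points):
    """Check all Θ(n²) pairs; return (smallest distance, that pair)."""
    return min((dist(p, q), (p, q)) for p, q in combinations(points, 2))
```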
Closest Point Pair Problem
ClosestPair(P)
Input: The point set P is represented by X = [x1, . . . , xn] and Y = [y1, . . . , yn].
Preprocessing: Sort X; sort Y. This takes O(n log n) time.
1: If n ≤ 4, find the closest point pair directly. This takes O(1) time.
2: Divide the point set P into two parts as follows: draw a vertical line l that
divides P into PL (points to the left of l) and PR (points to the right of l), so that
|PL| = ⌈n/2⌉ and |PR| = ⌊n/2⌋.
Note: Since X is already sorted, we can draw l between x⌈n/2⌉ and x⌈n/2⌉+1.
We scan X and collect the points into PL and PR. This takes O(n) time.
3: Recursively call ClosestPair(PL). Let
pLi, pLj be the point pair with the smallest distance in PL, and let
δL = dist(pLi, pLj).
Closest Point Pair Problem
4: Recursively call ClosestPair(PR). Let
pRi, pRj be the point pair with the smallest distance in PR, and let
δR = dist(pRi, pRj).
5: Let δ = min{δL, δR}. This takes O(1) time.
6: Combine. The solution of the original problem must fall into one of three cases:
The closest pair pi, pj are both in PL. We solved this case in (3).
The closest pair pi, pj are both in PR. We solved this case in (4).
One of {pi, pj} is in PL and the other is in PR. We must find the
solution in this case.
Note: Let S be the vertical strip of width 2δ centered at the line l. Then pi
and pj must both be in S. (Why?)
[Figure: the 2δ-wide strip S around l; above a point qi sit two δ × δ regions A and B that must contain any candidate qj.]

6.1: Let P = {q1, q2, . . . , qt} be the points in the 2δ-wide strip S, and let Y be the
y-coordinates of the points in P, in increasing order.
Note: Since Y is already sorted, we scan Y and include only the points that
are in the strip S. This takes O(n) time.
6.2: For each qi (i = 1 . . . t) in P, compute dist(qi, qj) for i < j ≤ i + 7. Let δ′
be the smallest distance computed in this step.
Note: If (qi, qj) is the closest pair, then both must be in the region A or B (with qi
at the bottom edge). But any two points in A have inter-distance at least δ, so A
can contain at most 4 points; similarly, B can contain at most 4 points. So we
only need to compare distances between qi and the next 7 points in P!

Closest Point Pair Problem
6.3: If δ′ < δ, the shortest distance computed in (6.2) is the shortest distance
for the original problem.
If δ′ ≥ δ, the shortest distance computed in (3) or (4) is the shortest distance
for the original problem.
Output accordingly.

Analysis: Let T(n) be the number of dist(·) computations made by the algorithm. The
algorithm makes two recursive calls, each of size n/2. All other processing
takes O(n) time. Thus:
T(n) = O(1)               if n ≤ 4
T(n) = 2T(n/2) + Θ(n)     if n > 4

Thus: T(n) = Θ(n log n).
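The whole algorithm can be sketched in Python (hypothetical code following the slides' steps; assumes n ≥ 2 distinct points and returns only the distance):

```python
from math import dist  # Euclidean distance, Python 3.8+

def closest_pair(points):
    """Divide-and-conquer closest pair: returns the smallest pairwise distance."""
    px = sorted(points)                        # preprocessing: sort by x
    py = sorted(points, key=lambda p: p[1])    # ... and by y

    def solve(px, py):
        n = len(px)
        if n <= 4:                             # step 1: small case, solve directly
            return min(dist(p, q) for i, p in enumerate(px) for q in px[i + 1:])
        mid = n // 2                           # step 2: vertical line l
        xm = px[mid - 1][0]
        lset = set(px[:mid])
        pyl = [p for p in py if p in lset]     # keep y-order within each half
        pyr = [p for p in py if p not in lset]
        d = min(solve(px[:mid], pyl),          # steps 3-5: recurse, take the min
                solve(px[mid:], pyr))
        # Step 6: scan the 2d-wide strip S in y order; by the packing
        # argument, each point is compared with at most the next 7.
        strip = [p for p in py if abs(p[0] - xm) < d]
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 8]:
                d = min(d, dist(p, q))
        return d

    return solve(px, py)
```

Passing the y-sorted list down (rather than re-sorting in each call) is what keeps the per-level work at O(n) and the total at Θ(n log n).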