# COP 3503 – Computer Science II: Class Notes, Day #5


## General Big-Oh Rules

**Def. (Big-Oh):** T(N) is O(F(N)) if there are positive constants c and N₀ such that T(N) ≤ cF(N) when N ≥ N₀. [an upper bound]

**Def. (Big-Omega):** T(N) is Ω(F(N)) if there are positive constants c and N₀ such that T(N) ≥ cF(N) when N ≥ N₀. [a lower bound]

**Def. (Big-Theta):** T(N) is Θ(F(N)) if and only if T(N) is O(F(N)) and T(N) is Ω(F(N)). [an upper and lower bound]

**Def. (Little-Oh):** T(N) is o(F(N)) if and only if T(N) is O(F(N)) and T(N) is not Θ(F(N)). [a strict upper bound]

| Mathematical Expression | Relative Rate of Growth |
|---|---|
| T(N) = O(F(N)) | Growth of T(N) is ≤ growth of F(N) |
| T(N) = Ω(F(N)) | Growth of T(N) is ≥ growth of F(N) |
| T(N) = Θ(F(N)) | Growth of T(N) is = growth of F(N) |
| T(N) = o(F(N)) | Growth of T(N) is < growth of F(N) |

Big-Oh analysis does not yield an exact asymptotic answer, but rather an upper bound. We are mostly interested in worst-case analysis over all inputs of size N. Sometimes, however, we are interested in the average case, where the running time is measured as an average over all inputs of size N.

## Verifying an Algorithmic Analysis

Let T(N) be the empirical (observed) running time of the code, and suppose the claim is made that T(N) is O(F(N)).
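The Big-Oh definition can be made concrete by exhibiting the constants it asks for. As a small illustrative sketch (the function T(N) = 3N² + 10N and the witnesses c = 4, N₀ = 10 are examples chosen here, not from the notes): 3N² + 10N ≤ 4N² exactly when 10N ≤ N², i.e. when N ≥ 10.

```python
# Empirically check the Big-Oh witnesses c = 4, N0 = 10 for the
# example function T(N) = 3N^2 + 10N, claimed to be O(N^2).

def T(n):
    return 3 * n**2 + 10 * n

def F(n):
    return n**2

c, N0 = 4, 10

# The definition requires T(N) <= c*F(N) for every N >= N0
# (verified here on a sample range).
assert all(T(n) <= c * F(n) for n in range(N0, 100_000))

# The inequality fails just below N0, which is exactly why the
# definition only demands it for N >= N0.
assert T(9) > c * F(9)

print("Big-Oh witnesses verified: T(N) <= 4*N^2 for N >= 10")
```

The same style of check works for Big-Omega by flipping the inequality.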


The technique is to compute a series of values T(N)/F(N) for a range of N (commonly spaced out by factors of two). Depending upon these values of T(N)/F(N), we can determine how accurate our estimate F(N) is:

- F(N) is a **tight** answer if the values converge to a positive constant.
- F(N) is an **overestimate** if the values converge to zero.
- F(N) is an **underestimate** if the values diverge (more and more rapidly, positively or negatively).

## Examples

### Example 1

Consider the following table of data obtained from running an instance of an algorithm assumed to be cubic. Decide if the Big-Oh estimate, O(N³), is accurate.

| Run | N | T(N) |
|---|---|---|
| 1 | 100 | 0.017058 ms |
| 2 | 1000 | 17.058 ms |
| 3 | 5000 | 2132.2464 ms |
| 4 | 10000 | 17057.971 ms |
| 5 | 50000 | 2132246.375 ms |

- T(N)/F(N) = 0.017058/(100 · 100 · 100) = 1.7058 × 10⁻⁸
- T(N)/F(N) = 17.058/(1000 · 1000 · 1000) = 1.7058 × 10⁻⁸
- T(N)/F(N) = 2132.2464/(5000 · 5000 · 5000) = 1.7058 × 10⁻⁸
- T(N)/F(N) = 17057.971/(10000 · 10000 · 10000) = 1.7058 × 10⁻⁸
- T(N)/F(N) = 2132246.375/(50000 · 50000 · 50000) = 1.7058 × 10⁻⁸

The calculated values converge to a positive constant (1.7058 × 10⁻⁸), so the estimate of O(N³) is a tight estimate.
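The ratio computation for Example 1 can be scripted directly; a minimal sketch using the table's data, with F(N) = N³:

```python
# Reproduce the T(N)/F(N) ratios from Example 1, where F(N) = N^3.
data = [
    (100, 0.017058),
    (1000, 17.058),
    (5000, 2132.2464),
    (10000, 17057.971),
    (50000, 2132246.375),
]

ratios = [t / n**3 for n, t in data]
for (n, _), r in zip(data, ratios):
    print(f"N = {n:>6}: T(N)/F(N) = {r:.4e}")

# All ratios land near 1.7058e-08, a positive constant,
# so O(N^3) is a tight estimate for this data.
assert max(ratios) / min(ratios) < 1.001
```

If the assumed F(N) were too strong (say N⁴), the same loop would show the ratios shrinking toward zero instead of stabilizing.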
### Example 2

Consider the following table of data obtained from running an instance of an algorithm assumed to be quadratic. Decide if the Big-Oh estimate, O(N²), is accurate.

| Run | N | T(N) |
|---|---|---|
| 1 | 100 | 0.00016 ms |
| 2 | 1000 | 0.01638 ms |
| 3 | 10000 | 1.6478 ms |
| 4 | 100000 | 166.0177 ms |
| 5 | 1000000 | 17057.971 ms |

- T(N)/F(N) = 0.00016/(100 · 100) = 1.6 × 10⁻⁸
- T(N)/F(N) = 0.01638/(1000 · 1000) = 1.638 × 10⁻⁸
- T(N)/F(N) = 1.6478/(10000 · 10000) = 1.6478 × 10⁻⁸
- T(N)/F(N) = 166.0177/(100000 · 100000) = 1.6601 × 10⁻⁸
- T(N)/F(N) = 17057.971/(1000000 · 1000000) = 1.70579 × 10⁻⁸

The values steadily increase rather than converging, so the estimate of O(N²) is an underestimate.

## Limitations of Big-Oh Notation

- It is not useful for small input sizes.
- The omission of the constants can be misleading. For example, compare 2N log N and 1000N: even though 2N log N has the larger growth rate, the first function is probably better in practice, since 2N log₂ N < 1000N until log₂ N exceeds 500, i.e. until N exceeds 2⁵⁰⁰.
- The constants also reflect machine-dependent costs such as memory access and disk access.
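The constants caveat can be checked numerically. A sketch comparing the two functions from the notes (assuming base-2 logarithms, which the notes do not specify):

```python
import math

# Compare 2N log2 N against 1000N: the first grows faster
# asymptotically, yet is smaller for every practical input size.
def f1(n):
    return 2 * n * math.log2(n)

def f2(n):
    return 1000 * n

# For all realistic N, the "asymptotically worse" 2N log N is cheaper.
for n in (10**3, 10**6, 10**9, 10**12):
    assert f1(n) < f2(n)

# The crossover only happens once log2 N > 500, i.e. N > 2**500,
# far beyond any input that could ever be stored.
assert f1(2**501) > f2(2**501)

print("2N log N beats 1000N for every tested practical N")
```

This is why Big-Oh comparisons should be read as statements about growth rates, not as verdicts about which implementation is faster on real inputs.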
