Lecture 1: Introduction to Algorithms Analysis

I'm assuming you've all had CSCE 350 or the equivalent. I'll assume some basic things from there, and sometimes quickly review the more important/subtle points.

We'll start with Appendix A in CLRS: Summations. Why? They are essential tools in analyzing the complexity of algorithms.

A First Example

Consider the following two C code fragments:

    /* Fragment 1 */
    sum = 0;
    for (i = 1; i < n; i *= 2)
        for (j = 0; j < i; j++)
            sum++;

and

    /* Fragment 2 */
    sum = 0;
    for (i = 1; i < n; i *= 2)
        for (j = i; j < n; j++)
            sum++;

Note the subtle difference. Is there a difference in running time (order of magnitude as a function of n)? Yes there is. Fragment 1 runs in time Θ(n) and Fragment 2 runs in time Θ(n log n), so Fragment 2 runs significantly longer.

[Recall: f = O(g) means f(n) is at most a constant times g(n) for all n large enough. f = Ω(g) means f(n) is at least a (positive) constant times g(n) for all n large enough. (Equivalently, g = O(f).) f = Θ(g) means both f = O(g) and f = Ω(g). f = Θ(g) is an equivalence relation between f and g. Also, log n means log₂ n.]

Here's the intuition: in both fragments, the variable i does not run from 1 to n at an even pace. Since it doubles each time, it spends most of its time being very small compared to n, which makes the first j-loop run faster and the second j-loop run slower.

Let's analyze the running times more rigorously. We generally don't care about constant factors, so it is enough to find, for each fragment, an upper bound and a lower bound that are within a constant factor of each other. This looseness usually makes life a lot easier for us, since we don't have to be exact.

Claim 1. The running time for Fragment 2 is O(n log n).

Proof. The body of the inner loop (the j-loop) takes O(1) time. Each time it runs, the j-loop iterates n − i ≤ n times, for a time of O(n) per execution. The outer i-loop runs about log n many times (actually, exactly ⌈log n⌉ many times). So the total time for the fragment (including initialization, loop testing, and increment) is O(n log n). □

Claim 2. The running time for Fragment 2 is Ω(n log n).

Proof. Note that for all iterations of the i-loop except the last one, the j-loop iterates at least n/2 times (because i < n/2 and thus n − i > n − n/2 = n/2). Thus the sum variable is incremented at least (n/2)(log n − 1) times total, which is clearly Ω(n log n). □

Claim 3. The running time for Fragment 1 is Ω(n).

Proof. To get a lower bound, we only need to look at the last iteration of the i-loop! (Digression: the late Paul Erdős, arguably the greatest mathematician of the 20th century, once described the art of mathematical analysis as knowing what information you can throw away.) The value of i in the last i-loop iteration must be at least n/2. In this iteration (as in all iterations), j runs from 0 through i − 1, so the sum variable is incremented i ...
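The counts behind these claims can also be read off from closed-form sums over the outer-loop values i = 2^k. A minimal sketch, assuming n ≥ 2 so that the i-loop runs exactly ⌈log n⌉ times, of how many times sum is incremented in each fragment:

    Fragment 1:  \sum_{k=0}^{\lceil \log n \rceil - 1} 2^k
                   \;=\; 2^{\lceil \log n \rceil} - 1 \;<\; 2n \;=\; O(n)

    Fragment 2:  \sum_{k=0}^{\lceil \log n \rceil - 1} (n - 2^k)
                   \;=\; n \lceil \log n \rceil - \bigl(2^{\lceil \log n \rceil} - 1\bigr)
                   \;=\; \Theta(n \log n)

The first sum, together with Claim 3, gives the matching Θ(n) bound for Fragment 1, since the geometric series is squeezed between n − 1 and 2n − 1.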
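As a quick sanity check of these bounds, here is a small, self-contained C program (a sketch, assuming the fragments above are copied verbatim) that counts the sum increments of both fragments for a few values of n. The first count grows roughly linearly in n, while the second grows roughly like n log n.

    #include <stdio.h>

    int main(void)
    {
        long n;
        for (n = 1000; n <= 1000000; n *= 10) {
            long sum1 = 0, sum2 = 0, i, j;

            /* Fragment 1: the j-loop runs i times per outer iteration */
            for (i = 1; i < n; i *= 2)
                for (j = 0; j < i; j++)
                    sum1++;

            /* Fragment 2: the j-loop runs n - i times per outer iteration */
            for (i = 1; i < n; i *= 2)
                for (j = i; j < n; j++)
                    sum2++;

            printf("n = %7ld   Fragment 1: %9ld   Fragment 2: %10ld\n",
                   n, sum1, sum2);
        }
        return 0;
    }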