
# Lecture 7: Recurrences (CMSC 251 Lecture Notes)


Analysis: What remains is to analyze the running time of MergeSort. First let us consider the running time of the procedure Merge(A, p, q, r). Let n = r - p + 1 denote the total length of both the left and right subarrays. What is the running time of Merge as a function of n? The algorithm contains four loops (none nested in the other). It is easy to see that each loop can be executed at most n times. (If you are a bit more careful you can actually see that all the while-loops together can only be executed n times in total, because each execution copies one new element to the array B, and B only has space for n elements.) Thus the running time to Merge n items is Θ(n). Let us write this without the asymptotic notation, simply as n. (We'll see later why we do this.)

Now, how do we describe the running time of the entire MergeSort algorithm? We will do this through the use of a recurrence, that is, a function that is defined recursively in terms of itself. To avoid circularity, the recurrence for a given value of n is defined in terms of values that are strictly smaller than n. Finally, a recurrence has some basis values (e.g. for n = 1), which are defined explicitly.

Let's see how to apply this to MergeSort. Let T(n) denote the worst-case running time of MergeSort on an array of length n. For concreteness we could count whatever we like: number of lines of pseudocode, number of comparisons, number of array accesses; these will only differ by a constant factor. Since all of the real work is done in the Merge procedure, we will count the total time spent in the Merge procedure. First observe that if we call MergeSort with a list containing a single element, then the running time is a constant. Since we are ignoring constant factors, we can just write T(n) = 1. When we call MergeSort with a list of length n > 1, e.g. MergeSort(A, p, r), where r - p + 1 = n, the algorithm first computes q = ⌊(p + r)/2⌋.
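As a concrete illustration (a sketch in Python, not the course's pseudocode), the linear-time merge described above might look like this; B is the temporary buffer mentioned in the argument, and every loop iteration copies exactly one element into it:

```python
def merge(A, p, q, r):
    """Merge the sorted subarrays A[p..q] and A[q+1..r] (inclusive) in place.

    Uses a buffer B of length n = r - p + 1. Each while-loop iteration
    copies exactly one new element into B, so across all the loops at
    most n copies are made: total work Theta(n).
    """
    B = []                       # temporary buffer; holds at most n elements
    i, j = p, q + 1              # i scans the left half, j scans the right half
    while i <= q and j <= r:     # copy the smaller front element
        if A[i] <= A[j]:
            B.append(A[i]); i += 1
        else:
            B.append(A[j]); j += 1
    while i <= q:                # leftovers from the left half
        B.append(A[i]); i += 1
    while j <= r:                # leftovers from the right half
        B.append(A[j]); j += 1
    A[p:r + 1] = B               # copy the merged result back into A
```

Note that the buffer never exceeds n elements, which is exactly the observation used above to bound the total loop executions by n.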
The left subarray A[p..q] contains q - p + 1 elements. You can verify (by some tedious floor-ceiling arithmetic, or more simply by just trying an odd example and an even example) that it is of size ⌈n/2⌉. Thus the remaining subarray A[q+1..r] has ⌊n/2⌋ elements in it. How long does it take to sort the left subarray? We do not know this, but because ⌈n/2⌉ < n for n > 1, we can express this as T(⌈n/2⌉). Similarly, we can express the time that it takes to sort the right subarray as T(⌊n/2⌋). Finally, to merge both sorted lists takes n time, by the comments made above. In conclusion we have

    T(n) = 1                            if n = 1,
    T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n      otherwise.

Lecture 7: Recurrences (Tuesday, Feb 17, 1998)

Read: Chapt. 4 on recurrences. Skip Section 4.4.

Divide and Conquer and Recurrences: Last time we introduced divide-and-conquer as a basic technique for designing efficient algorithms. Recall that the basic steps in a divide-and-conquer solution are (1) divide the problem into a small number of subproblems, (2) solve each subproblem recursively, and (3) combine the solutions to the subproblems into a global solution. We also described MergeSort, a sorting algorithm based on divide-and-conquer.
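The MergeSort recurrence T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n, with basis T(1) = 1, can also be evaluated numerically. The following sketch (an illustration, not part of the original notes) transcribes it directly; the spot-checks in the comments assume n is a power of 2, where the recurrence works out to n lg n + n:

```python
from math import ceil, floor

def T(n):
    """Worst-case MergeSort cost from the recurrence:
    T(1) = 1, and T(n) = T(ceil(n/2)) + T(floor(n/2)) + n for n > 1."""
    if n == 1:
        return 1                                   # basis case: constant time
    return T(ceil(n / 2)) + T(floor(n / 2)) + n    # sort both halves, then merge

# Spot-checks for powers of two, where T(n) = n*lg(n) + n:
# T(2) = 1 + 1 + 2 = 4, T(4) = 4 + 4 + 4 = 12, T(8) = 12 + 12 + 8 = 32.
```

The floor and ceiling matter only when n is odd; e.g. for n = 5 the two recursive calls are on subproblems of sizes ⌈5/2⌉ = 3 and ⌊5/2⌋ = 2.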


This note was uploaded on 01/13/2012 for the course CMSC 351 taught by Professor Staff during the Fall '11 term at University of Louisville.


