Lecture Notes CMSC 251

To bound this, recall the integration formula for bounding summations (which we paraphrase here). For any monotonically increasing function f(x),

\[ \sum_{i=a}^{b-1} f(i) \;\le\; \int_a^b f(x)\,dx. \]

The function f(x) = x ln x is monotonically increasing, and so we have

\[ S(n) \;\le\; \int_2^n x \ln x\,dx. \]

If you are a calculus macho man, then you can integrate this by parts, and if you are a calculus wimp (like me) then you can look it up in a book of integrals:

\[ \int_2^n x \ln x\,dx \;=\; \left( \frac{x^2}{2}\ln x - \frac{x^2}{4} \right) \Bigg|_{x=2}^{n} \;=\; \frac{n^2}{2}\ln n - \frac{n^2}{4} - (2\ln 2 - 1) \;\le\; \frac{n^2}{2}\ln n - \frac{n^2}{4}. \]

This completes the summation bound, and hence the entire proof.

Summary: So even though the worst-case running time of QuickSort is Θ(n²), the average-case running time is Θ(n log n). Although we did not show it, it turns out that this doesn't just happen much of the time. For large values of n, the running time is Θ(n log n) with high probability. In order to get Θ(n²) time, the algorithm must make poor choices for the pivot at virtually every step. Poor choices are rare, and so continuously making poor choices is very rare. You might ask: could we make QuickSort deterministically Θ(n log n) by calling the selection algorithm to use the median as the pivot? The answer is that this would work, but the resulting algorithm would be so slow in practice that no one would ever use it.

QuickSort (like MergeSort) is not formally an in-place sorting algorithm, because it does make use of a recursion stack. In MergeSort and in the expected case for QuickSort, the size of the stack is O(log n), so this is not really a problem.

QuickSort is the most popular algorithm for implementation because its actual performance (on typical modern architectures) is so good. The reason for this stems from the fact that, unlike HeapSort, which can make large jumps around in the array, the main work in QuickSort (in partitioning) spends most of its time accessing elements that are close to one another.
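As a quick sanity check on the bound above, the following Python sketch compares the summation against the closed-form value of the integral. This is my own illustration, not from the notes: I am assuming S(n) denotes the sum with a = 2 and b = n in the paraphrased formula, i.e. S(n) = Σ_{i=2}^{n-1} i ln i.

```python
import math

def S(n):
    # The summation being bounded (an assumed reading of the notes):
    # sum_{i=2}^{n-1} i * ln(i), taking a = 2 and b = n in the formula.
    return sum(i * math.log(i) for i in range(2, n))

def integral_bound(n):
    # Closed form of the integral: (x^2/2) ln x - x^2/4, evaluated from 2 to n.
    return (n * n / 2) * math.log(n) - n * n / 4 - (2 * math.log(2) - 1)

# The integral should dominate the sum for every n, since f(x) = x ln x
# is monotonically increasing on [2, n].
for n in (10, 100, 1000):
    assert S(n) <= integral_bound(n)
    print(n, round(S(n), 1), "<=", round(integral_bound(n), 1))
```

The assertion holds because each term f(i) of the sum is at most the area under f(x) over the unit interval [i, i+1], which is exactly the geometric argument behind the integration formula.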
The reason it tends to outperform MergeSort (which also has good locality of reference) is that most comparisons are made against the pivot element, which can be stored in a register. In MergeSort we are always comparing two array elements against each other. The most efficient versions of QuickSort use recursion for large subarrays, but once the size of a subarray falls below some minimum size (e.g. 20), they switch to a simple iterative algorithm, such as selection sort.

Lecture 16: Lower Bounds for Sorting (Thursday, Mar 19, 1998)

Read: Chapt. 9 of CLR.

Review of Sorting: So far we have seen a number of algorithms for sorting a list of numbers in ascending order. Recall that an in-place sorting algorithm is one that uses no additional array storage (however, we allow QuickSort to be called in-place even though it needs a stack of size O(log n) for keeping track of the recursion). A sorting algorithm is stable if duplicate elements remain in the same relative position after sorting.
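The stability definition is easy to see on a small example. The sketch below (my own illustration, with made-up records) tags each key with a letter marking its original position, then contrasts Python's built-in sort, which is guaranteed stable, with a plain selection sort, a classic example of an unstable sort: its long-distance swaps can reverse the order of equal keys.

```python
# Each record is (key, tag); the tags record the original left-to-right order.
data = [(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')]

# Python's built-in sorted() is stable: among the records with key 2,
# 'a' still precedes 'c' after sorting.
stable = sorted(data, key=lambda rec: rec[0])
print(stable)    # [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]

def selection_sort(a):
    # Selection sort on a copy: repeatedly swap the minimum remaining
    # element into position i.  The swap can jump an equal key past
    # another, which is what breaks stability.
    a = list(a)
    for i in range(len(a)):
        m = min(range(i, len(a)), key=lambda j: a[j][0])
        a[i], a[m] = a[m], a[i]
    return a

unstable = selection_sort(data)
print(unstable)  # [(1, 'b'), (1, 'd'), (2, 'c'), (2, 'a')]  <- 'c' now precedes 'a'
```

Both outputs are sorted by key, but only the first preserves the relative order of the duplicate keys, so only the first sort is stable.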