One of the most commonly used sorting algorithms is quicksort. While the worst-case
run time of quicksort is O(n²), the average run time is O(n lg n), typically with a
smaller constant than merge sort or heapsort. Like heapsort, quicksort also sorts in place.
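The notes do not include the code; as an illustrative sketch, a minimal in-place quicksort using the Lomuto partition scheme (last element as pivot — one common choice, assumed here):

```python
def partition(A, p, r):
    # Lomuto partition: pivot is the last element A[r].
    # Elements <= pivot are moved to the front of A[p..r].
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]   # put the pivot in its final position
    return i + 1

def quicksort(A, p, r):
    # Sorts A[p..r] in place.
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)   # left subarray (elements <= pivot)
        quicksort(A, q + 1, r)   # right subarray (elements > pivot)
```

Unlike merge sort, no auxiliary arrays are allocated; all rearrangement happens inside A itself.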
Proof of Correctness
In order to prove the correctness of an algorithm, i.e. that it will work for any input
set, we construct a loop invariant, which is a condition that holds for each iteration of
the loop (thus the name invariant). The proof then consists of three steps: showing the
invariant holds before the first iteration (initialization), that each iteration preserves
it (maintenance), and that at loop exit the invariant implies correctness (termination).
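As an illustration (not from the notes), insertion sort's loop invariant — the prefix A[0..i-1] is sorted at the start of every outer iteration — can be checked directly with an assertion:

```python
def insertion_sort(A):
    for i in range(1, len(A)):
        # Loop invariant: A[0..i-1] is sorted at the start of each iteration.
        assert all(A[k] <= A[k + 1] for k in range(i - 1))
        key = A[i]
        j = i - 1
        while j >= 0 and A[j] > key:
            A[j + 1] = A[j]   # shift larger elements one slot right
            j -= 1
        A[j + 1] = key        # insert key into its sorted position
    return A
```

When the loop terminates, i = len(A), so the invariant says A[0..len(A)-1] — the whole array — is sorted, which is exactly the termination step of the proof.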
Worst Case
The worst case (unlike insertion sort) occurs when the array is already sorted. In this
case the pivot element is always the largest element at each step, and thus the array is
partitioned into subarrays of size n-1 and 0. This gives the following recurrence:

T(n) = T(n-1) + Θ(n)

which solves to T(n) = Θ(n²).
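This can be checked empirically. Assuming a Lomuto-style partition with the last element as pivot (the notes don't show the partition code), an already-sorted input forces exactly (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons:

```python
def quicksort_count(A, p, r):
    # Quicksort (Lomuto partition, last element as pivot) that
    # returns the number of element comparisons performed.
    if p >= r:
        return 0
    x = A[r]
    i = p - 1
    count = 0
    for j in range(p, r):
        count += 1            # one comparison per loop iteration
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    q = i + 1
    return count + quicksort_count(A, p, q - 1) + quicksort_count(A, q + 1, r)
```

On a sorted input the pivot is always the maximum, so each call recurses on sizes n-1 and 0, giving the quadratic total.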
Analysis Issues
1. WHAT DATA STRUCTURES TO USE? (lists, queues, stacks, heaps, trees, etc.)
2. IS IT CORRECT? (all or only most of the time?)
3. HOW EFFICIENT IS IT? (asymptotically fixed or does it depend on the inputs?)
4. IS THERE AN EFFICIENT ALGORITHM?
Formal mathematical notation can be used both to rigorously define the asymptotic
behavior of a particular algorithm and to serve as a mechanism to compare algorithms.
Thus the asymptotic behavior is typically used to describe the efficiency of an algorithm.
In order to formulate a good "guess" at a solution to use in the substitution method,
a recursion tree can often be constructed to find an approximation to the run time.
Similar to the one constructed for merge sort, the procedure consists of:
1. Construct the recursion tree by repeatedly expanding the recurrence.
2. Compute the cost contributed at each level of the tree.
3. Sum the per-level costs over all levels to obtain a guess for the total run time.
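As a sketch, applying this procedure to the standard merge-sort recurrence T(n) = 2T(n/2) + cn (assumed here; the notes' own tree is not shown):

```latex
T(n) = 2\,T(n/2) + cn
% The tree has \lg n + 1 levels; level i contains 2^i nodes,
% each costing c\,n/2^i, so every level contributes
2^i \cdot \frac{cn}{2^i} = cn
% Summing over all levels:
T(n) = \sum_{i=0}^{\lg n} cn = cn(\lg n + 1) = \Theta(n \lg n)
```

The guess Θ(n lg n) can then be verified rigorously with the substitution method.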
Counting Sort
Since sorting by definition is deriving an ordering for elements, how can the order be
determined without comparing the elements? If the elements are integers in the range
0 to k, the answer comes in using the elements not only as values but also as indices
into an auxiliary counting array.
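A sketch of counting sort following the standard three-pass structure (count, prefix-sum, place), assuming integer inputs in 0..k:

```python
def counting_sort(A, k):
    # Stable sort of integers in range 0..k in Theta(n + k) time.
    C = [0] * (k + 1)
    for a in A:                 # pass 1: C[v] = number of elements equal to v
        C[a] += 1
    for v in range(1, k + 1):   # pass 2: prefix sums, C[v] = # of elements <= v
        C[v] += C[v - 1]
    B = [0] * len(A)
    for a in reversed(A):       # pass 3: place each element at its final index;
        C[a] -= 1               # iterating in reverse keeps the sort stable
        B[C[a]] = a
    return B
```

No element is ever compared to another; each element's value is used directly as an index into C.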
For problems that have a fixed number of identically sized recursive pieces, there is a
much simpler method known as the master theorem that can be used to solve certain
recurrence equations almost "by inspection".
Master Theorem
The master theorem can be employed to solve recurrences of the form
T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1 are constants and f(n) is an
asymptotically positive function.
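As an illustration, a simplified version restricted to f(n) = Θ(n^d) can be mechanized directly (this is narrower than the full theorem, which handles general f(n) and a regularity condition in case 3):

```python
import math

def master_theorem(a, b, d):
    # Simplified master theorem for T(n) = a T(n/b) + Theta(n^d).
    crit = math.log(a, b)                 # critical exponent log_b(a)
    if math.isclose(d, crit):
        return f"Theta(n^{d:g} lg n)"     # case 2: every level costs the same
    if d < crit:
        return f"Theta(n^{crit:g})"       # case 1: the leaves dominate
    return f"Theta(n^{d:g})"              # case 3: the root dominates
```

For merge sort (a = 2, b = 2, d = 1) the critical exponent is 1, so case 2 gives the familiar Θ(n lg n).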
o() - ("Little O")
A weaker (non-asymptotically tight) upper bound that is often useful and easier to
determine is defined for two functions f(n) and g(n) as

o(g(n)) = { f(n) : for any constant c > 0, there exists a constant n0 > 0
            such that 0 ≤ f(n) < c g(n) for all n ≥ n0 }

This upper bound can be shown by the following limit

lim (n → ∞) f(n)/g(n) = 0

For example, it can easily be shown using this limit that sin(n) = o(n), since
sin(n) is bounded while n grows without bound.
Heap values
We define the following values for an array A viewed as a heap:
A.length - size of the array
A.heapsize - number of elements in the heap (note A.heapsize ≤ A.length)
height - height of the heap's binary tree = Θ(lg n)
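The parent/child relationships come directly from index arithmetic; a sketch using 1-based indices as in CLRS (the helper names are illustrative, not from the notes):

```python
def parent(i):
    # Parent of node i (1-based indexing; root is index 1).
    return i // 2

def left(i):
    # Left child of node i.
    return 2 * i

def right(i):
    # Right child of node i.
    return 2 * i + 1

def height(n):
    # Height of a heap with n elements = floor(lg n).
    return n.bit_length() - 1
```

Because the tree is stored implicitly in the array, navigating it needs no pointers, only these index computations.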
Heap operations
For heaps we assume the following basic operations:
MERGE()
For MERGE():
Input: A[p..q] and A[q+1..r] are sorted
Output: A[p..r] is sorted
Θ(1) Lines 1-3 - compute two values and allocate memory
Θ(n) Lines 4-7 - copy elements from A[] into L[] and R[]
Θ(1) Lines 8-11 - add sentinel values and initialize loop counters
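The line numbers above refer to the CLRS pseudocode; a Python sketch of the same procedure (0-based indices, with float('inf') playing the role of the sentinels) follows the same steps:

```python
import math

def merge(A, p, q, r):
    # Merge sorted runs A[p..q] and A[q+1..r] in place, CLRS-style.
    L = A[p:q + 1] + [math.inf]       # copy left run, append sentinel
    R = A[q + 1:r + 1] + [math.inf]   # copy right run, append sentinel
    i = j = 0
    for k in range(p, r + 1):
        # The sentinels guarantee L[i] and R[j] are always valid,
        # so no bounds checks are needed inside the loop.
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
```

Each of the r - p + 1 output positions is filled with exactly one comparison, giving the Θ(n) merge that the line-by-line costs add up to.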