lecture11
Wednesday, May 10th: Algorithmic Complexity and Sorting Algorithms, Part I
Algorithmic Complexity

In computer science, we need a way to compare algorithms to one another: e.g., which algorithm is most efficient for a given problem of a given size. Typically, we measure an algorithm's efficiency by either (a) how fast it runs or (b) how much memory it uses for a problem of a given size. We use a concept called "big-oh" to give a rough upper bound on how many operations an algorithm requires to process n items.
Big-Oh: The Concept

Big-oh idea: use simple functions like log(n), n, n^2, n*log(n), n^3, etc. to convey how many operations an algorithm must perform to process n items.

"That sorting algorithm is O(n^2), so to sort n = 1000 items it requires roughly a million operations."

"That sorting algorithm is O(n * log2 n), so to sort n = 1000 items it requires roughly 10,000 operations."

This allows us to easily compare two different algorithms.
Big-Oh

By operations, we mean any of the following:

1. Accessing an item (e.g., an item in an array)
2. Evaluating a mathematical expression
3. Traversing a single link in a linked list, etc.

So how do we compute the big-oh of a function? First, we need to determine the number of operations an algorithm performs. Let's call this f(n). Let's see how to evaluate the number of operations for a simple example...
Big-Oh

int arr[n][n];
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        arr[i][j] = 0;

Compute f(n), the number of critical operations that this algorithm performs:

1. Our algorithm initializes the value of i once:                 1
2. Our algorithm performs n comparisons between i and n:        + n
3. Our algorithm increments the variable i n times:             + n
4. Our algorithm initializes the value of j n different times:  + n

Now that we have f(n), we can compute our algorithm's big-oh.
Big-Oh Complexity

Here are the steps to compute the big-oh of an algorithm:

1. Determine how many steps f(n) an algorithm requires to solve a problem, in terms of the number of items n.
2. Keep the most significant term of that function and throw away the rest. For example: if f(n) = 2n^2 + 3n + 1, keep the 2n^2 term, drop the constant factor, and conclude the algorithm is O(n^2).
Big-Oh

[Figure: number of operations vs. number of items n (1 to 110), comparing the growth curves f(n) = n^2, f(n) = n log n, and f(n) = n. The slide's point: by using only the most significant term, big-oh conveys roughly how many operations are needed to process n items.]
Big-Oh Complexity

       n   | log2 n | n log2 n |     n^2     |      n^3      |   2^n
    -------+--------+----------+-------------+---------------+---------
       10  |    3   |      30  |        100  |        1,000  | ~1,000
      100  |    6   |     600  |     10,000  |    1,000,000  | ~10^30
    1,000  |    9   |   9,000  |  1,000,000  | 1,000,000,000 | ~10^301
   10,000  |   13   | 130,000  | 100,000,000 |     10^12     | WOW!

This note was uploaded on 10/20/2009 for the course CS 32 taught by Professor David Smallberg during the Winter '08 term at UCLA.
