Lecture 16
CSE 100, UCSD

- Simple vs. more sophisticated algorithm cost analysis
- The cost of accessing memory
- B-trees
- B-tree performance analysis
- B-tree find, insert, and delete operations
- B-tree example: a 2-3 tree

Reading: Weiss, Ch. 4, Section 7

Sophisticated cost analysis

- In ordinary algorithmic analysis, you look at the time or space cost of doing a single operation.
- Ordinary, simple algorithmic analysis does not give a useful picture in some cases.
- For example, operations on self-adjusting structures can have poor worst-case performance for a single operation, but the total cost of a sequence of operations is very good.
- Amortized cost analysis is a more sophisticated analysis which may be more appropriate in this case (a sketch follows below).
- In ordinary algorithmic analysis, you also assume that any “direct” memory access takes the same amount of time, no matter what.
- This also does not give a useful picture in some cases, and we will need a more sophisticated form of analysis.

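To make the amortized idea concrete, here is a minimal sketch in Java (my own illustration, not code from the lecture) of an array-backed list: a single add can cost O(n) when the backing array must be doubled and copied, yet any sequence of n adds performs at most about 2n element copies in total, so the amortized cost per add is O(1).

    // Array-backed list used only to illustrate amortized cost:
    // the occasional O(n) grow-and-copy is paid for by the many O(1) adds.
    public class DynamicArray {
        private int[] data = new int[1];
        private int count = 0;

        public void add(int x) {
            if (count == data.length) {                      // expensive case: array is full
                int[] bigger = new int[2 * data.length];     // double the capacity
                System.arraycopy(data, 0, bigger, 0, count); // O(n) copy
                data = bigger;
            }
            data[count++] = x;                               // cheap case: O(1)
        }

        public int get(int i) { return data[i]; }
        public int size()     { return count; }
    }

Growing from capacity 1 up through n copies 1 + 2 + 4 + ... elements, which sums to fewer than 2n, so n adds cost O(n) work in total even though a single add can be expensive.
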
Memory accesses

Suppose you are accessing elements of an array:

    if ( a[i] < a[j] ) { ...

... or suppose you are dereferencing pointers:

    temp.next.next = elem.prev.prev; ...

... or in general reading or writing the values of variables:

    disc = x*x / (4 * a * c);

- In simple algorithmic analysis, each of these variable accesses is assumed to have the same constant time cost.
- However, in reality this assumption may not hold.
- Accessing a variable may in fact have very different time costs, depending on where in the memory hierarchy that variable happens to be stored (as sketched below).

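As a rough illustration of this point (a hypothetical micro-benchmark, not part of the lecture), the Java sketch below performs exactly the same source-level access a[idx[i]] in two passes; only the order in which the elements are visited differs, yet the random-order pass is typically several times slower, because most of its accesses miss the cache and must go to slower levels of the memory hierarchy.

    import java.util.Random;

    // Same array, same access expression, very different running times:
    // a sequential visiting order stays cache-friendly, a shuffled order does not.
    public class AccessCost {
        static final int N = 1 << 23;   // about 8 million ints (~32 MB)

        static long sum(int[] a, int[] idx) {
            long s = 0;
            for (int i = 0; i < idx.length; i++) s += a[idx[i]];
            return s;
        }

        public static void main(String[] args) {
            int[] a = new int[N], seq = new int[N], shuf = new int[N];
            for (int i = 0; i < N; i++) { a[i] = i; seq[i] = i; shuf[i] = i; }

            Random rng = new Random(42);
            for (int i = N - 1; i > 0; i--) {            // Fisher-Yates shuffle
                int j = rng.nextInt(i + 1);
                int t = shuf[i]; shuf[i] = shuf[j]; shuf[j] = t;
            }

            long t0 = System.nanoTime();
            long s1 = sum(a, seq);
            long t1 = System.nanoTime();
            long s2 = sum(a, shuf);
            long t2 = System.nanoTime();

            System.out.println("sequential order: " + (t1 - t0) / 1e6 + " ms (sum " + s1 + ")");
            System.out.println("random order:     " + (t2 - t1) / 1e6 + " ms (sum " + s2 + ")");
        }
    }
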
The memory hierarchy

- In a typical computer, there is a lot of memory.
- This memory is of different types: CPU registers, level 1 and level 2 cache, main memory (RAM), hard disk, etc.
- This memory is organized in a hierarchy.
- As you move down the hierarchy, memory is cheaper, slower, and there is more of it.
- Differences in memory speeds can be very dramatic, and so it can be very important for algorithmic analysis to take memory speed into account.

Typical memory hierarchy

    Level            Amount of storage (approx.)   Time to access (approx.)
    CPU registers    hundreds of bytes             1 nanosecond
    cache            hundreds of kilobytes         10 nanoseconds
    main memory      hundreds of megabytes         100 nanoseconds
    disk             hundreds of gigabytes         10 milliseconds

Consequences of the memory hierarchy

- Accessing a variable can be fast or slow, depending on various factors.
- If a variable is in slow memory, accessing it will be slow.
- However, when it is accessed, the operating system will typically move that variable to faster memory (“cache” or “buffer” it), along with some nearby variables.
- The idea is: if a variable is accessed once in a program, it (and nearby variables) is likely to be accessed again (see the sketch below).

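A short sketch of how much this locality can matter (again hypothetical, not from the lecture): both loops below read every element of the same two-dimensional array exactly once, but the row-major loop visits elements that sit next to each other in memory, so it benefits from the nearby values brought into the cache with each access, while the column-major loop jumps to a different row on every access and gets far less help from the cache.

    // Row-major vs. column-major traversal of the same 2D array:
    // identical work, very different use of the cached "nearby" values.
    public class Locality {
        static final int N = 4096;

        public static void main(String[] args) {
            int[][] m = new int[N][N];

            long t0 = System.nanoTime();
            long rowSum = 0;
            for (int i = 0; i < N; i++)          // row-major: good spatial locality
                for (int j = 0; j < N; j++)
                    rowSum += m[i][j];
            long t1 = System.nanoTime();

            long colSum = 0;
            for (int j = 0; j < N; j++)          // column-major: poor spatial locality
                for (int i = 0; i < N; i++)
                    colSum += m[i][j];
            long t2 = System.nanoTime();

            System.out.println("row-major:    " + (t1 - t0) / 1e6 + " ms (sum " + rowSum + ")");
            System.out.println("column-major: " + (t2 - t1) / 1e6 + " ms (sum " + colSum + ")");
        }
    }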