6.851: Advanced Data Structures                              Spring 2010
Lecture 20 — 22 April, 2010
Prof. Erik Demaine

Memory Hierarchies and Models of Them

So far in class, we have worked with models of computation such as the word RAM or cell-probe models. These models account for communication with memory one word at a time: if we need to read 10 words, it costs 10 units. On modern computers, this is virtually never the case. Modern computers have a memory hierarchy to speed up memory operations. The typical levels in the memory hierarchy are:

Memory Level      Size      Response Time
CPU registers     ≈ 100 B   ≈ 0.5 ns
L1 cache          ≈ 64 KB   ≈ 1 ns
L2 cache          ≈ 1 MB    ≈ 10 ns
Main memory       ≈ 2 GB    ≈ 150 ns
Hard disk         ≈ 1 TB    ≈ 10 ms

Clearly, the fastest memory levels are substantially smaller than the slowest ones. Generally, each level has a direct connection only to the level immediately below it. (In addition, the faster, smaller levels are substantially more expensive to produce, so do not expect 1 GB of register space any time soon.)

Additionally, many of the levels communicate in blocks. For example, asking RAM to read one integer will typically also transmit a "block" of nearby data, so processing the other members of that block costs no additional memory transfers. This issue is even more pronounced when communicating with the disk: the 10 ms is dominated by the time needed to find the data (moving the read head over the disk). Modern disks are circular and spin at 7200 rpm, so once the head is in position, reading all of the data on that "ring" is practically free.

This speaks to a need for algorithms that are designed to deal with blocks of data. Algorithms that properly take advantage of the memory hierarchy will be much faster in practice, and memory models that correctly describe the hierarchy will be more useful for analysis. We will see some fundamental models and their associated results today.

External Memory Model

The external memory model was introduced by Aggarwal and Vitter in 1988 [1]; it is also called the "I/O model" or the "disk access model" (DAM). The external memory model simplifies the memory hierarchy to just two levels. The CPU is connected to a fast cache of size M; this cache in turn is connected to a slower disk of effectively infinite size. Both cache and disk are divided into blocks of size B. Reading or writing one block between cache and disk costs 1 unit; operations on blocks already in cache are free.

Clearly, any algorithm from, say, the word RAM model with running time T(N) requires no more than T(N) memory transfers in the external memory model (at most one memory transfer per operation). The lower bound, which is usually harder to obtain, is T(N)/B memory transfers; this corresponds to taking perfect advantage of locality, i.e., each block is read or written only a constant number of times.
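To make the cost accounting concrete, here is a minimal sketch (not from the lecture; the constants N and B, the one-block cache, and the names such as read_word are illustrative assumptions) that simulates the two-level model by counting block transfers for a sequential scan versus a block-hostile access order.

/* A minimal sketch (assumptions, not from the lecture notes): simulate the
 * two-level external memory model by counting block transfers.  A block
 * holds B words, and for simplicity the cache holds a single block; the
 * constants and names below are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)   /* number of words in the array (assumed) */
#define B 64          /* words per block (assumed)              */

static long transfers;          /* block transfers charged so far      */
static long cached_block = -1;  /* the single block currently in cache */

/* Read word i, charging one transfer when its block is not the one in
 * cache.  Operations on cached words are free, as in the model. */
static int read_word(const int *a, long i) {
    long block = i / B;
    if (block != cached_block) {
        transfers++;
        cached_block = block;
    }
    return a[i];
}

int main(void) {
    int *a = malloc(N * sizeof *a);
    long sum = 0;
    for (long i = 0; i < N; i++) a[i] = 1;

    /* Sequential scan: consecutive words share blocks, so the N reads
     * cost only about N/B transfers. */
    transfers = 0; cached_block = -1;
    for (long i = 0; i < N; i++) sum += read_word(a, i);
    printf("sequential scan: %ld transfers (N/B = %d)\n", transfers, N / B);

    /* Block-hostile order: jump by B words on every read, so each read
     * lands in a block that is no longer cached and the same N reads
     * cost about N transfers. */
    transfers = 0; cached_block = -1;
    for (long j = 0; j < B; j++)
        for (long i = j; i < N; i += B)
            sum += read_word(a, i);
    printf("strided order:   %ld transfers (N = %d)\n", transfers, N);

    free(a);
    return sum == 2L * N ? 0 : 1;   /* use sum so the reads are not dead code */
}

Under the model, the sequential scan is charged about N/B transfers while the block-hostile order is charged about N, even though both perform exactly N reads; this is the gap between the trivial T(N) upper bound and the T(N)/B lower bound described above.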