1 Welcome to Part 3: Memory Systems and I/O

- We've already seen how to make a fast processor. How can we supply the CPU with enough data to keep it busy?
- We will now focus on memory issues, which are frequently bottlenecks that limit the performance of a system.
- We'll start off by looking at memory systems in the remaining lectures.

[Figure: block diagram connecting the Processor to Memory and Input/Output]
2 Cache introduction

- Today we'll answer the following questions:
  - What are the challenges of building big, fast memory systems?
  - What is a cache?
  - Why do caches work? (answer: locality)
  - How are caches organized? Where do we put things, and how do we find them?
3 Large and fast

- Today's computers depend on large and fast storage systems.
  - Large storage capacities are needed for many database applications, scientific computations with large data sets, video and music, and so forth.
  - Speed is important to keep up with our pipelined CPUs, which may access both an instruction and data in the same clock cycle. Things get even worse if we move to a superscalar CPU design.
- So far we've assumed our memories can keep up and our CPU can access memory in one cycle, but as we'll see, that's a simplification.
4 Small or slow

- Unfortunately, there is a tradeoff between speed, cost, and capacity.
- Fast memory is too expensive for most people to buy a lot of.
- But dynamic memory has a much longer delay than other functional units in a datapath. If every lw or sw accessed dynamic memory, we'd have to either increase the cycle time or stall frequently.
- Here are rough estimates of some current storage parameters.

  Storage        Speed      Cost        Capacity
  Static RAM     Fastest    Expensive   Smallest
  Dynamic RAM    Slow       Cheap       Large
  Hard disks     Slowest    Cheapest    Largest

  Storage        Delay              Cost/MB    Capacity
  Static RAM     1-10 cycles        ~$10       128 KB - 2 MB
  Dynamic RAM    100-200 cycles     ~$0.20     128 MB - 4 GB
  Hard disks     10,000,000 cycles  ~$0.001    20 GB - 200 GB
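To see why "fast memory is too expensive to buy a lot of," we can plug the rough per-MB costs from the table above into a quick calculation. This is only a sketch using the lecture's ballpark figures, not current market prices:

```python
# Rough per-MB costs taken from the table above (ballpark lecture figures).
COST_PER_MB = {
    "Static RAM": 10.00,
    "Dynamic RAM": 0.20,
    "Hard disk": 0.001,
}

def cost_of(megabytes, technology):
    """Approximate cost of the given capacity in one storage technology."""
    return megabytes * COST_PER_MB[technology]

# Cost of 1 GB (1024 MB) in each technology:
for tech in COST_PER_MB:
    print(f"{tech:12s}: ${cost_of(1024, tech):,.2f}")
```

At these prices, a single gigabyte of SRAM costs about $10,000, versus roughly $200 for DRAM and about a dollar for disk, which is why no one builds main memory entirely out of SRAM.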
5 How to create the illusion of big and fast

- Memory hierarchy: put small and fast memories closer to the CPU, and large and slow memories further away.

[Figure: a pyramid of hierarchy levels, from Level 1 nearest the CPU down to Level n; both access time and memory size increase with distance from the CPU]
6 Introducing caches

- A cache is a small amount of fast, expensive memory.
  - The cache goes between the processor and the slower, dynamic main memory.
  - It keeps a copy of the most frequently used data from main memory.
- Memory access speed increases overall, because we've made the common case faster.
  - Reads and writes to the most frequently used addresses will be serviced by the cache.
  - We only need to access the slower main memory for less frequently used data.
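"Making the common case faster" can be quantified with the standard average memory access time (AMAT) formula: AMAT = hit time + miss rate × miss penalty. The 95% hit rate and latencies below are assumed purely for illustration:

```python
# Average memory access time with one cache in front of DRAM:
#   AMAT = hit_time + miss_rate * miss_penalty
# Hit rate and latencies are assumed for illustration, not measured.

def amat(hit_time, miss_rate, miss_penalty):
    """Cycles per access when hits cost hit_time and misses add miss_penalty."""
    return hit_time + miss_rate * miss_penalty

no_cache = 100  # every access goes straight to 100-cycle DRAM
with_cache = amat(hit_time=1, miss_rate=0.05, miss_penalty=100)

print(f"No cache:   {no_cache} cycles per access")
print(f"With cache: {with_cache:.1f} cycles per access")
```

With a 95% hit rate, the average drops from 100 cycles to 1 + 0.05 × 100 = 6 cycles per access, even though the cache itself holds only a tiny fraction of memory.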