ChapterSeven-Nine2003

1. Chapter Seven
2. Memories: Review
SRAM: value is stored on a pair of inverting gates; very fast, but takes up more space than DRAM (4 to 6 transistors).
DRAM: value is stored as a charge on a capacitor (must be refreshed); very small, but slower than SRAM (by a factor of 5 to 10).
[Figure: an SRAM cell (pair of inverting gates, nodes A and B) and a DRAM cell (word line, pass transistor, capacitor, bit line).]
3. Exploiting Memory Hierarchy
Users want large and fast memories!
SRAM access times are 2-25 ns at a cost of $100 to $250 per MB (1997).
DRAM access times are 60-120 ns at a cost of $5 to $10 per MB.
Disk access times are 10 to 20 million ns at a cost of $0.10 to $0.20 per MB.
Try to give it to them anyway: build a memory hierarchy.
[Figure: levels in the memory hierarchy, from Level 1 nearest the CPU down to Level n; access time increases and the size of the memory at each level grows with distance from the CPU.]
4. Locality
A principle that makes having a memory hierarchy a good idea.
If an item is referenced:
temporal locality: it will tend to be referenced again soon.
spatial locality: nearby items will tend to be referenced soon.
Why does code have locality?
Our initial focus: two levels (upper, lower).
block: minimum unit of data.
hit: the data requested is in the upper level.
miss: the data requested is not in the upper level.
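The two kinds of locality show up in even the simplest code. As a small illustration (a made-up C fragment, not from the slides), summing an array touches consecutive addresses and reuses the same few variables:

    #include <stdio.h>

    #define N 1024

    int main(void) {
        static int a[N];        /* toy array; static, so it starts zeroed */
        int sum = 0;

        /* Spatial locality: a[0], a[1], a[2], ... live at consecutive
           addresses, so items near a referenced item are used soon after.
           Temporal locality: sum and i are referenced again on every
           iteration of the loop. */
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %d\n", sum);
        return 0;
    }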
5. Cache
Two issues:
How do we know if a data item is in the cache?
If it is, how do we find it?
Our first example: block size is one word of data, "direct mapped".
For each item of data at the lower level, there is exactly one location in the cache where it might be.
e.g., lots of items at the lower level share locations in the upper level.
6. Direct Mapped Cache
Mapping: the cache index is the memory block address modulo the number of blocks in the cache.
[Figure: an eight-entry cache (indices 000-111); memory blocks 00001, 00101, 01001, 01101, 10001, 10101, 11001, and 11101 map to the cache entries given by their low-order three bits.]
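As a sketch of this mapping rule (using the eight-entry cache and the block addresses from the figure), the index is the block address modulo the number of blocks, which for a power of two is just the low-order address bits:

    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_BLOCKS 8      /* the eight-entry cache from the figure */

    int main(void) {
        /* Memory block addresses 00001, 00101, ..., 11101 from the figure. */
        uint32_t blocks[] = {1, 5, 9, 13, 17, 21, 25, 29};

        for (int i = 0; i < 8; i++) {
            /* index = block address mod CACHE_BLOCKS = low-order 3 bits */
            uint32_t index = blocks[i] % CACHE_BLOCKS;
            printf("memory block %2u -> cache index %u\n",
                   (unsigned)blocks[i], (unsigned)index);
        }
        return 0;
    }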
7. Direct Mapped Cache
For MIPS:
[Figure: a 32-bit MIPS address (bits 31-0) split into a 20-bit tag (bits 31-12), a 10-bit index (bits 11-2) selecting one of 1024 cache entries (index 0-1023), and a byte offset; each entry holds a valid bit, a tag, and 32 bits of data, and a hit is signaled when the entry is valid and its stored tag matches the address tag.]
What kind of locality are we taking advantage of?
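A sketch of the address breakdown in the figure: with 1024 one-word blocks, bits 1-0 of a 32-bit MIPS address are the byte offset, bits 11-2 the index, and bits 31-12 the tag (the example address here is made up):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t addr = 0x1234ABCD;             /* arbitrary example address */

        uint32_t byte_offset = addr & 0x3;      /* bits  1..0                */
        uint32_t index = (addr >> 2) & 0x3FF;   /* bits 11..2, 0..1023       */
        uint32_t tag   = addr >> 12;            /* bits 31..12, 20-bit tag   */

        printf("tag = 0x%05X  index = %u  byte offset = %u\n",
               (unsigned)tag, (unsigned)index, (unsigned)byte_offset);
        return 0;
    }

The index selects one of the 1024 cache entries; the stored tag at that entry is compared with the address tag, and a match with the valid bit set is a hit.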
8. Hits vs. Misses
Read hits: this is what we want!
Read misses: stall the CPU, fetch the block from memory, deliver it to the cache, restart.
Write hits: either replace the data in both the cache and memory (write-through), or write the data only into the cache and write it back to memory later (write-back).
Write misses: read the entire block into the cache, then write the word.
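A minimal sketch of these cases, assuming a direct-mapped cache with one-word blocks, a write-through policy, and a toy word-addressed backing store (the names and sizes here are assumptions, not the slides' design):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_BLOCKS 1024                 /* direct-mapped, one word per block */
    #define MEM_WORDS  (1u << 20)           /* toy 4 MB backing store            */

    typedef struct {
        bool     valid;
        uint32_t tag;
        uint32_t data;
    } cache_line;

    static cache_line cache[NUM_BLOCKS];
    static uint32_t   memory[MEM_WORDS];    /* word-addressed main memory */

    uint32_t cache_read(uint32_t addr) {
        uint32_t word  = (addr >> 2) % MEM_WORDS;   /* keep toy addresses in range */
        uint32_t index = word % NUM_BLOCKS;
        uint32_t tag   = word / NUM_BLOCKS;
        cache_line *line = &cache[index];

        if (!line->valid || line->tag != tag) {     /* read miss: fetch the block, */
            line->valid = true;                     /* deliver it to the cache,    */
            line->tag   = tag;                      /* then restart the access     */
            line->data  = memory[word];
        }
        return line->data;                          /* read hit */
    }

    void cache_write(uint32_t addr, uint32_t value) {
        uint32_t word  = (addr >> 2) % MEM_WORDS;
        uint32_t index = word % NUM_BLOCKS;
        uint32_t tag   = word / NUM_BLOCKS;
        cache_line *line = &cache[index];

        /* Write miss: with one-word blocks the fetched word would be
           overwritten immediately, so just install the new value.     */
        line->valid = true;
        line->tag   = tag;
        line->data  = value;                        /* write hit: update the cache  */
        memory[word] = value;                       /* write-through: update memory */
    }

    int main(void) {
        cache_write(0x1000, 42);
        printf("read 0x1000: %u (hit)\n",  (unsigned)cache_read(0x1000));
        printf("read 0x2000: %u (miss)\n", (unsigned)cache_read(0x2000));
        return 0;
    }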
9. Hardware Issues
Make reading multiple words easier by using banks of memory.
It can get a lot more complicated...
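One common arrangement (interleaving, assumed here with four banks) places consecutive words in different banks, so the words of a block can be fetched in overlapped fashion rather than one after another:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_BANKS 4     /* assumed number of banks */

    int main(void) {
        /* Consecutive word addresses rotate through the banks, so a
           four-word block occupies one row in each of the four banks. */
        for (uint32_t word = 0; word < 8; word++) {
            uint32_t bank = word % NUM_BANKS;   /* which bank holds the word */
            uint32_t row  = word / NUM_BANKS;   /* position within that bank */
            printf("word %u -> bank %u, row %u\n",
                   (unsigned)word, (unsigned)bank, (unsigned)row);
        }
        return 0;
    }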