Lecture 7: Memory Subsystem Design

Memory Subsystem Design, or: Nothing Beats Cold, Hard Cache
CSE 141, Dean Tullsen

Memory Locality

Memory hierarchies take advantage of memory locality: the principle that future memory accesses tend to be near past accesses. They exploit two types of locality:
-- near in time => we will often access the same data again very soon (temporal locality)
-- near in space/distance => our next access is often very close to our last access, or to recent accesses (spatial locality)
This sequence of addresses exhibits both temporal and spatial locality: 1, 2, 3, 1, 2, 3, 8, 8, 47, 9, 10, 8, 8, ...
(A code sketch of both kinds appears after the hierarchy overview below.)

Locality and Caching

Memory hierarchies exploit locality by caching (keeping close to the processor) data that is likely to be used again. This is done because we can build large, slow memories and small, fast memories, but we can't build large, fast memories. If it works, we get the illusion of SRAM access time with disk capacity.
-- SRAM access times are 2-25 ns, at a cost of $100 to $250 per MByte.
-- DRAM access times are 60-120 ns, at a cost of $5 to $10 per MByte.
-- Disk access times are 10 to 20 million ns, at a cost of $0.10 to $0.20 per MByte.

A Typical Memory Hierarchy

[Figure: the CPU backed by successive memory levels -- on-chip cache, off-chip cache, main memory, disk -- running from small, fast, and expensive per bit to big, slow, and cheap per bit. So then where are my program and data?]
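To make the two kinds of locality concrete, here is a minimal C sketch; the array and loop are illustrative, not from the lecture. The sequential walk exhibits spatial locality, and the repeated use of the accumulator exhibits temporal locality.

```c
#include <stdio.h>

#define N 1024

int main(void) {
    static int a[N];  /* contiguous data, zero-initialized */
    int sum = 0;

    /* Spatial locality: consecutive iterations touch adjacent
     * addresses, so the miss that fetches a[i]'s block also brings
     * in the neighbors that the next iterations will hit on. */
    for (int i = 0; i < N; i++)
        sum += a[i];

    /* Temporal locality: sum and i are read and written on every
     * iteration, so they stay in the fastest level the whole time. */
    printf("%d\n", sum);
    return 0;
}
```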
Cache Fundamentals

-- cache hit: an access where the data is found in the cache
-- cache miss: an access where it isn't
-- hit time: time to access the cache
-- miss penalty: time to move data from the further level to the closer one, then to the CPU
-- hit ratio: percentage of accesses in which the data is found in the cache
-- miss ratio: 1 - hit ratio
(These quantities combine into an average access time; see the sketch at the end of this page.)

[Figure: the CPU connected to the lowest-level cache, which is backed by the next-level memory/cache.]

Cache Fundamentals, cont.

-- cache block size (or cache line size): the amount of data that gets transferred on a cache miss
-- instruction cache: a cache that holds only instructions
-- data cache: a cache that holds only data
-- unified cache: a cache that holds both

A Simple Cache

A cache that can put a line of data anywhere is called fully associative. The most popular replacement strategy is LRU (least recently used). The tag identifies the address of the cached data.

The example cache has 4 entries, each block holds one word, and any block can hold any word. Address string (decimal address, then its binary form; a simulation of this string appears below):
4    00000100
8    00001000
12   00001100
4    00000100
8    00001000
20   00010100
4    00000100
8    00001000
20   00010100
24   00011000
12   00001100
8    00001000
4    00000100

A Simpler Cache

A cache that can put a line of data in exactly one place is called direct-mapped. Advantages/disadvantages vs. fully associative? (See the index/tag sketch below.)
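The fundamentals above combine into a single figure of merit: every access pays the hit time, and the miss-ratio fraction of accesses additionally pays the miss penalty, so average access time = hit time + miss ratio x miss penalty. A minimal sketch with illustrative latencies (the numbers are assumptions, not from the slides):

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers, not from the lecture. */
    double hit_time     = 1.0;    /* ns to access the cache          */
    double miss_penalty = 100.0;  /* ns to fetch from the next level */
    double miss_ratio   = 0.05;   /* = 1 - hit ratio                 */

    /* Every access pays hit_time; misses also pay miss_penalty. */
    double avg = hit_time + miss_ratio * miss_penalty;
    printf("average access time = %.1f ns\n", avg);  /* prints 6.0 */
    return 0;
}
```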
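Next, a minimal sketch of the 4-entry fully associative cache from the "A Simple Cache" slide: one word per block, LRU replacement, run over the slide's address string. The helper names (access_cache, trace, and so on) are hypothetical, not from the lecture.

```c
#include <stdio.h>

#define WAYS 4

static int tags[WAYS];   /* cached word addresses (here, the whole tag) */
static int age[WAYS];    /* larger age = less recently used             */
static int valid[WAYS];

/* Returns 1 on a hit, 0 on a miss; updates LRU state either way. */
static int access_cache(int addr) {
    for (int i = 0; i < WAYS; i++)
        age[i]++;                          /* every line gets older */
    for (int i = 0; i < WAYS; i++)
        if (valid[i] && tags[i] == addr) { /* hit: refresh its age  */
            age[i] = 0;
            return 1;
        }

    /* Miss: fill an empty line if one exists, else evict the LRU line. */
    int victim = 0;
    for (int i = 0; i < WAYS; i++) {
        if (!valid[i]) { victim = i; break; }
        if (age[i] > age[victim]) victim = i;
    }
    valid[victim] = 1;
    tags[victim]  = addr;
    age[victim]   = 0;
    return 0;
}

int main(void) {
    /* The address string from the slide. */
    int trace[] = {4, 8, 12, 4, 8, 20, 4, 8, 20, 24, 12, 8, 4};
    int n = (int)(sizeof trace / sizeof trace[0]);
    int hits = 0;

    for (int i = 0; i < n; i++) {
        int hit = access_cache(trace[i]);
        hits += hit;
        printf("%2d: %s\n", trace[i], hit ? "hit " : "miss");
    }
    printf("hit ratio = %d/%d\n", hits, n);
    return 0;
}
```

Run over the slide's trace, this model reports 6 hits out of 13 accesses: the repeats of 4, 8, and 20 keep hitting until 24 and 12 force LRU evictions.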
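And a matching sketch for the direct-mapped alternative from "A Simpler Cache": each block address maps to exactly one line, so lookup needs no search and no LRU bookkeeping. Sizes mirror the slide's 4-line, one-word-per-block cache; the variable names are illustrative.

```c
#include <stdio.h>

#define LINES      4  /* number of cache lines (power of two) */
#define BLOCK_SIZE 4  /* bytes per block = one word           */

int main(void) {
    /* Byte addresses drawn from the slide's address string. */
    int trace[] = {4, 8, 12, 20, 24};
    int n = (int)(sizeof trace / sizeof trace[0]);

    for (int i = 0; i < n; i++) {
        int addr  = trace[i];
        int block = addr / BLOCK_SIZE;  /* drop the byte offset        */
        int index = block % LINES;      /* the one line it may occupy  */
        int tag   = block / LINES;      /* stored to identify the word */
        printf("addr %2d -> line %d, tag %d\n", addr, index, tag);
    }
    return 0;
}
```

Addresses 4 and 20 both land on line 1, and 8 and 24 both land on line 2, so they evict each other even while other lines sit free: a direct-mapped cache trades conflict misses for a cheaper, faster lookup, which is the trade-off the slide asks about.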
