Memory Subsystem Design, or Nothing Beats Cold, Hard Cache
CSE 240A, Dean Tullsen

Who Cares about Memory Hierarchy?
We have considered only the processor so far in this course, but the CPU-DRAM performance gap keeps widening. In 1980 microprocessors had no cache at all; by 1995 the Alpha 21164 had a 2-level on-chip cache that consumed about 60% of its transistors.

Memory Cache
We can put a small, fast memory close to the processor: cpu -> cache -> memory. What do we put there?

Memory Locality
Memory hierarchies take advantage of memory locality: the principle that future memory accesses are near past accesses. They exploit two types of locality:
Temporal locality -- near in time => we will often access the same data again very soon.
Spatial locality -- near in space/distance => our next access is often very close to our last access (or recent accesses).
Example reference stream: 1, 2, 3, 1, 2, 3, 8, 8, 47, 9, 10, 8, 8, ...
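
The difference between good and poor locality shows up in ordinary loop code. The sketch below is not from the slides; the array size and loop structure are arbitrary choices for illustration. It sums the same matrix twice: the row-major traversal walks memory sequentially and so has strong spatial locality, while the column-major traversal strides across rows and uses each fetched cache line poorly.

```c
#include <stdio.h>

#define N 1024               /* illustrative size, not from the slides */

static double a[N][N];

/* Row-major traversal: consecutive accesses touch adjacent addresses,
 * so each cache line brought in by a miss is fully used before it is
 * evicted (good spatial locality). */
double sum_row_major(void) {
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Column-major traversal of the same data: successive accesses are
 * N * sizeof(double) bytes apart, so once the array is larger than the
 * cache nearly every access misses (poor spatial locality). */
double sum_col_major(void) {
    double sum = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}

int main(void) {
    printf("%f %f\n", sum_row_major(), sum_col_major());
    return 0;
}
```

Both functions compute the same result; only the access order, and therefore the locality, differs.
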
Locality and Caching
Memory hierarchies exploit locality by caching (keeping close to the processor) data that is likely to be used again. We do this because we can build large, slow memories and small, fast memories, but we cannot build large, fast memories. If it works, we get the illusion of SRAM access time with disk capacity.
SRAM (static RAM) -- 5-20 ns access time, very expensive (on-chip is faster).
DRAM (dynamic RAM) -- 60-100 ns access time, cheaper.
Disk -- access time measured in milliseconds, very cheap.

A Typical Memory Hierarchy
CPU -> on-chip cache(s) -> off-chip cache -> main memory -> disk. The levels closest to the CPU are small, with an expensive cost per bit; the levels closest to the disk are big, with a cheap cost per bit. So then where are my program and data?

Cache Fundamentals
Consider a cpu backed by a highest-level cache, which is in turn backed by lower-level memory/cache.
cache hit -- an access where the data is found in the (highest-level) cache.
cache miss -- an access where it is not found there.
hit time -- the time to access the higher (upper-level) cache.
miss penalty -- the time to move data from the lower level to the upper level, then to the cpu.
hit ratio -- the percentage of accesses for which the data is found in the higher cache.
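
These quantities combine into the standard average memory access time (AMAT) formula, which is not stated on these slides but follows directly from the definitions above: AMAT = hit time + miss rate * miss penalty, where miss rate = 1 - hit ratio. A minimal sketch, with purely illustrative numbers:

```c
#include <stdio.h>

/* Average memory access time from the quantities defined above:
 * every access pays the hit time, and the fraction that misses
 * (1 - hit ratio) additionally pays the miss penalty. */
double amat(double hit_time_ns, double hit_ratio, double miss_penalty_ns) {
    double miss_rate = 1.0 - hit_ratio;
    return hit_time_ns + miss_rate * miss_penalty_ns;
}

int main(void) {
    /* Illustrative numbers only: a 2 ns hit time, 95% hit ratio, and
     * 80 ns miss penalty are assumptions, not figures from the slides. */
    printf("AMAT = %.2f ns\n", amat(2.0, 0.95, 80.0));  /* 2 + 0.05*80 = 6.00 ns */
    return 0;
}
```

The formula makes the design trade-off explicit: a small, fast cache keeps the hit time low, and locality keeps the miss rate low enough that the large miss penalty is rarely paid.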