L16 – Cache Issues
6.004 – Fall 2010 (11/2/10)

Why the book? Lecture notes are faster! Quiz #3 on Friday.

General Cache Principle

SETUP: A requestor makes a stream of lookup requests to a data store, and there is some observed predictability (e.g., locality) in the reference patterns. Accessing the data store directly takes time t_M.

TRICK: Place a small, fast cache memory between the requestor and the data store, with access time t_C << t_M. The cache holds the <tag, value> pairs most likely to be requested. A request goes to the data store only when the tag can't be found in the cache, i.e., with probability 1 - α, where α is the probability that the cache has the data (a "hit"). The average access time is then

    t_C + (1 - α) t_M

Basic Cache Algorithm

ON REFERENCE TO Mem[X]: Look for X among the cache tags...

HIT: X = TAG(i), for some cache line i.
    READ: return DATA(i).
    WRITE: change DATA(i); start write to Mem[X].

MISS: X not found in the TAG of any cache line.
    REPLACEMENT SELECTION: select some line k to hold Mem[X] (allocation).
    READ: read Mem[X]; set TAG(k) = X, DATA(k) = Mem[X].
    WRITE: start write to Mem[X]; set TAG(k) = X, DATA(k) = new Mem[X].

Cache Design Issues

- Associativity: a basic tradeoff between parallel searching (expensive) and constraints on which addresses can be stored where.
- Block size: amortizing the cost of a tag over multiple words of data.
- Replacement strategy: OK, we've missed. Gotta add this new address/value pair to the cache. What do we kick out?
    - Least Recently Used (LRU): discard the line we haven't used for the longest time.
    - Plausible alternatives exist, e.g., random replacement.
- Write strategy: when do we write cache contents to main memory?
Fully Associative Cache

(Figure: each cache "line" holds a TAG and its Data, with a dedicated "=?" comparator matching the tag against the incoming 32-bit address from the CPU; 30-bit tags; on a miss, data arrives from memory.)

READ HIT: One of the cache tags matches the incoming address; the data associated with that tag is returned to the CPU. Update usage info.

READ MISS: None of the cache tags matched, so initiate an access to main memory and stall the CPU until it completes. Update the LRU cache entry with the new address/data.

Issue: COST!!! (one comparator per cache line)

Direct Mapped Cache

Low-cost extreme: a single comparator. Use ordinary (fast) static RAM for the cache tags and data: the K-bit cache index (the low-order address bits) selects one entry of a 2^K x (T + D)-bit static RAM; the T upper address bits are compared against the stored tag to signal a HIT, and the D-bit data word is driven out.

DISADVANTAGE: COLLISIONS — two addresses with the same index bits evict each other.

QUESTION: Why not use the HIGH-order bits as the cache index?

(Figure fragment: collision example. "Loop B: Pgm at 1024, data at 2048: ... but not here!")
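The index/tag split in the direct-mapped cache, and the kind of collision the Loop B fragment above alludes to, can be sketched as follows. This is an assumption-laden sketch: it is word-addressed (no block-offset bits) and uses a hypothetical 10-bit index (1024 lines), under which addresses 1024 and 2048 both map to line 0 and would repeatedly evict each other.

```python
def split_address(addr, index_bits):
    """Split an address into (tag, index) for a direct-mapped cache.

    The low-order index_bits select the cache line; the remaining
    upper bits are stored as the tag. Word-addressed sketch: a real
    cache would also strip block-offset bits first.
    """
    index = addr & ((1 << index_bits) - 1)   # low-order K bits
    tag = addr >> index_bits                 # T upper address bits
    return tag, index

# With a 10-bit index, 1024 and 2048 share line 0 but differ in tag,
# so accesses alternating between them miss every time.
assert split_address(1024, 10) == (1, 0)
assert split_address(2048, 10) == (2, 0)
```

This also hints at the answer to the slide's question: low-order bits spread consecutive addresses across different lines, whereas high-order bits would map a whole contiguous region onto one line.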