18-447 Lecture 18: Memory Hierarchy: Cache Design
James C. Hoe, Dept of ECE, CMU
April 2, 2008
(CMU 18-447 S'08, © 2008 J. C. Hoe)

Announcements: It will all be over before you know it.
Handouts:

The Problem (recap)

Given potentially M = 2^m bytes of memory, how do we keep the most frequently used locations in C bytes of fast storage, where C << M?

Basic issues (intertwined):
(1) Where to "cache" a memory location?
(2) How to find a cached memory location?
(3) What granularity of management: large, small, uniform?
(4) When to bring a memory location into the cache?
(5) Which cached memory location to evict to free up space?

Optimizations
CMU 18-447 S’08 L18-3 © 2008 J. C. Hoe Basic Operation (recap) hit? cache lookup return data data address choose location occupied? yes no no fetch new from L i+1 evict old to L i+1 yes update cache Ans to (4): memory location brought into cache “on-demand”. What about prefetch? (2) (1, 3, 5) (4) CMU 18-447 S’08 L18-4 © 2008 J. C. Hoe Direct-Mapped Cache (v1) Data Bank C/G lines by G bytes let t= lg 2 M lg 2 (C) tag idx g G bytes data = Tag Bank C/G lines by t bits hit? t bits t bits lg 2 (C/G) bits valid What about writes? M-bit address
Storage Overhead

For each cache block of G bytes, we must also store an additional "t+1" bits (t tag bits plus a valid bit), where t = lg2(M) - lg2(C).
- If M = 2^32, G = 4, and C = 16K = 2^14, then t = 18 bits for each 4-byte block: about 60% storage overhead, so a "16KB" cache really needs 25.5KB of SRAM.
- Solution: let multiple G-byte words share a common tag.
  - Each B-byte block holds B/G words.
  - If M = 2^32, B = 16, G = 4, and C = 16K, then t = 18 bits for each 16-byte block: about 15% storage overhead, so a 16KB cache needs only 18.4KB of SRAM.
- 15% of 16KB is small, but 15% of 1MB is 152KB, so use larger block sizes for the lower (larger) levels of the hierarchy.

Direct-Mapped Cache (final)

Let t = lg2(M) - lg2(C). The address now splits into four fields:
- tag: t bits
- idx: lg2(C/B) bits, selecting one of the C/B lines
- bo: lg2(B/G) bits, selecting a G-byte word within the B-byte block
- g: lg2(G) bits, selecting a byte within the word

Structure:
- Data bank: C/B lines by B bytes; bo picks the G-byte word out of the selected block.
- Tag bank: C/B lines by t bits, plus a valid bit per line.
Direct-Mapped Cache

- C bytes of storage divided into C/B blocks.
- A block of memory is mapped to one particular cache block according to the address's block-index field.
- All addresses with the same block-index field map to the same cache block:
  - there are 2^t such addresses, and only one such block can be cached at a time;
  - even if C > working set size, collisions are possible.

This note was uploaded on 04/07/2008 for the course ECE 18447 taught by Professor Hoe during the Spring '08 term at Carnegie Mellon.
