10 - Hit_time

Memory Hierarchy: Reducing Hit Time
Review: Improving Cache Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
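
These three factors combine in the average memory access time: AMAT = hit time + miss rate x miss penalty. The C sketch below just evaluates that formula; the numbers in it are purely illustrative, not measurements of any particular machine.

    #include <stdio.h>

    /* Average memory access time:
     *   AMAT = hit_time + miss_rate * miss_penalty
     * All values below are illustrative placeholders.
     */
    int main(void) {
        double hit_time = 1.0;       /* cycles to hit in the cache       */
        double miss_rate = 0.05;     /* fraction of accesses that miss   */
        double miss_penalty = 40.0;  /* cycles to service a miss         */

        double amat = hit_time + miss_rate * miss_penalty;
        printf("AMAT = %.2f cycles\n", amat);  /* 1 + 0.05 * 40 = 3.00 */
        return 0;
    }

Reducing any one of the three terms lowers AMAT; this lecture focuses on the first term, the hit time.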
1. Hit Time Reduction Technique: Small and Simple Caches

Why does the Alpha 21164 have an 8KB instruction cache and an 8KB data cache, plus a 96KB second-level cache?
- A small data cache supports a fast clock rate.
- The first-level caches are direct mapped and on chip.

A time-consuming portion of a cache hit is using the index portion of the address to read the tag memory and then compare the stored tag to the address. The guideline that smaller hardware is faster applies directly here. It is also critical to keep the cache small enough to fit on the same chip as the processor, avoiding the time penalty of going off-chip. A main benefit of direct-mapped caches is that the designer can overlap the tag check with the transmission of the data, as sketched below. For second-level caches, some designs strike a compromise by keeping the tags on chip and the data off chip, promising a fast tag check while providing the greater capacity of separate memory chips.
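
To make the index/tag discussion concrete, here is a sketch of a direct-mapped lookup: the index bits select exactly one line, so only a single tag comparison is needed, and the data in that line can be read out in parallel with the comparison. The parameters (8 KB cache, 32-byte blocks, 32-bit addresses) are assumptions chosen to resemble a small first-level cache, not figures taken from the Alpha 21164 documentation.

    #include <stdint.h>
    #include <stdbool.h>

    /* Assumed direct-mapped cache parameters:
     * 8 KB capacity, 32-byte blocks, 32-bit addresses -> 256 sets.
     */
    #define BLOCK_SIZE   32
    #define CACHE_SIZE   (8 * 1024)
    #define NUM_SETS     (CACHE_SIZE / BLOCK_SIZE)   /* 256 lines   */
    #define OFFSET_BITS  5                           /* log2(32)    */
    #define INDEX_BITS   8                           /* log2(256)   */

    struct cache_line {
        bool     valid;
        uint32_t tag;
        uint8_t  data[BLOCK_SIZE];
    };

    static struct cache_line cache[NUM_SETS];

    /* Returns true on a hit. Because the mapping is direct, the index
     * names exactly one candidate line; the single valid-bit and tag
     * comparison is the timing-critical step, and the data array can be
     * read speculatively while that comparison completes.
     */
    bool cache_hit(uint32_t addr) {
        uint32_t index = (addr >> OFFSET_BITS) & (NUM_SETS - 1);
        uint32_t tag   = addr >> (OFFSET_BITS + INDEX_BITS);
        return cache[index].valid && cache[index].tag == tag;
    }

A set-associative cache would instead need several tag comparisons and a multiplexer to pick the matching way before the data could be forwarded, which is why the direct-mapped design helps keep hit time short.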