Computer Organization and Design: The Hardware/Software Interface


CS152 Computer Architecture and Engineering
Lecture 21: Caches (finished); Virtual Memory

April 19, 2004
John Kubiatowicz (www.cs.berkeley.edu/~kubitron)
lecture slides: http://inst.eecs.berkeley.edu/~cs152/

4/19/04 ©UCB Spring 2004 CS152 / Kubiatowicz

Recall: Cache Performance

Execution_Time = Instruction_Count x Cycle_Time
                 x (ideal CPI + Memory_Stalls/Inst + Other_Stalls/Inst)

Memory_Stalls/Inst =
    Instruction Miss Rate x Instruction Miss Penalty
  + Loads/Inst x Load Miss Rate x Load Miss Penalty
  + Stores/Inst x Store Miss Rate x Store Miss Penalty

Average Memory Access Time (AMAT)
  = Hit Time_L1 + (Miss Rate_L1 x Miss Penalty_L1)
  = (Hit Rate_L1 x Hit Time_L1) + (Miss Rate_L1 x Miss Time_L1)

In general: Average Memory Access Time = Hit Time + (Miss Rate x Miss Penalty)

Recall: A Summary on Sources of Cache Misses

° Compulsory (cold start or process migration; first reference): the first access to a block
  - "Cold" fact of life: not a whole lot you can do about it
  - Note: if you are going to run "billions" of instructions, compulsory misses are insignificant
° Capacity: the cache cannot contain all blocks accessed by the program
  - Solution: increase cache size
° Conflict (collision): multiple memory locations map to the same cache location
  - Solution 1: increase cache size
  - Solution 2: increase associativity
° Coherence (invalidation): another process (e.g., I/O) updates memory

Summary: Cache Techniques

° Caches, TLBs, and virtual memory are all understood by examining how they deal with four questions:
  1) Where can a block be placed?
  2) How is a block found?
  3) Which block is replaced on a miss?
  4) How are writes handled?
° More cynical version of this: everything in computer architecture is a cache!
° Techniques people use to improve the miss rate of caches
  (MR = miss rate, MP = miss penalty, HT = hit time; "+" means the technique improves that metric):

  Technique                         MR   MP   HT   Complexity
  Larger Block Size                 +              0
  Higher Associativity              +              1
  Victim Caches                     +              2
  Pseudo-Associative Caches         +              2
  HW Prefetching of Instr/Data      +              2
  Compiler Controlled Prefetching   +              3
  Compiler Reduce Misses            +              0
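The performance formulas above can be checked with a small worked example. The hit time, miss rates, penalties, and instruction mix below are illustrative assumptions, not numbers from the slides:

```python
# Worked example of the cache-performance formulas on the slides above.
# All parameter values are illustrative assumptions, not from the slides.

def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time = Hit Time + Miss Rate x Miss Penalty."""
    return hit_time + miss_rate * miss_penalty

def memory_stalls_per_inst(i_mr, i_mp,
                           loads_per_inst, l_mr, l_mp,
                           stores_per_inst, s_mr, s_mp):
    """Memory_Stalls/Inst, split into instruction, load, and store misses."""
    return (i_mr * i_mp
            + loads_per_inst * l_mr * l_mp
            + stores_per_inst * s_mr * s_mp)

# Assumed machine: 1-cycle hit, 2% miss rate, 50-cycle miss penalty.
print(amat(hit_time=1, miss_rate=0.02, miss_penalty=50))   # 1 + 0.02*50 = 2.0 cycles

# Assumed mix: 1% instruction miss rate; 0.30 loads/inst at 4% miss rate;
# 0.10 stores/inst at 6% miss rate; 50-cycle penalty throughout.
stalls = memory_stalls_per_inst(0.01, 50, 0.30, 0.04, 50, 0.10, 0.06, 50)
print(stalls)   # 0.5 + 0.6 + 0.3 ≈ 1.4 stall cycles per instruction
```

Plugging Memory_Stalls/Inst back into the Execution_Time formula shows why a seemingly small miss rate matters: with an ideal CPI of 1, these assumed numbers alone more than double the effective CPI.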
Recall: Reducing Misses via a "Victim Cache"

[Figure: a small fully associative buffer of four entries - each holding one cache line of data plus a tag and comparator - sits between the cache and the next lower level in the hierarchy.]

° How to combine the fast hit time of direct mapped and yet still avoid conflict misses?
° Add a buffer to hold data discarded from the cache
° Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache
° Used in Alpha, HP machines

Improving Cache Performance (Continued)

1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.

0. Reducing Penalty: Faster DRAM / Interface

° New DRAM technologies
  - RAMBUS - same initial latency, but much higher bandwidth
  - Synchronous DRAM
  - TMJ-RAM (tunneling magnetic-junction RAM) from IBM??
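The victim-cache idea above can be sketched as a toy model. The geometry (8-set direct-mapped cache, 16-byte blocks, 4-entry victim buffer with FIFO replacement) and the address trace are assumptions for illustration, not Jouppi's configuration:

```python
# Toy direct-mapped cache backed by a small fully associative victim buffer.
# A block evicted from the main cache goes into the victim buffer; a miss
# that hits in the buffer swaps the block back instead of going to memory.
# Sizes and the trace are illustrative assumptions, not from the slides.

from collections import OrderedDict

class VictimCache:
    def __init__(self, num_sets=8, victim_entries=4, block_size=16):
        self.num_sets, self.block_size = num_sets, block_size
        self.lines = [None] * num_sets        # direct-mapped: one tag per set
        self.victims = OrderedDict()          # FIFO buffer of evicted blocks
        self.victim_entries = victim_entries
        self.hits = self.victim_hits = self.misses = 0

    def access(self, addr):
        block = addr // self.block_size
        index, tag = block % self.num_sets, block // self.num_sets
        if self.lines[index] == tag:
            self.hits += 1
        elif block in self.victims:           # conflict miss caught by the buffer
            self.victim_hits += 1
            del self.victims[block]
            self._fill(index, tag)
        else:
            self.misses += 1                  # must go to the next lower level
            self._fill(index, tag)

    def _fill(self, index, tag):
        old_tag = self.lines[index]
        if old_tag is not None:               # displaced line becomes a victim
            self.victims[old_tag * self.num_sets + index] = True
            if len(self.victims) > self.victim_entries:
                self.victims.popitem(last=False)   # drop the oldest victim
        self.lines[index] = tag

# Two blocks that collide in the direct-mapped cache, accessed alternately:
# without the victim buffer, every access after the first two would miss.
cache = VictimCache()
for addr in [0x000, 0x080] * 8:
    cache.access(addr)
print(cache.misses, cache.victim_hits)   # 2 cold misses, 14 victim-cache hits
```

This is exactly the pathological trace a victim cache targets: the main cache keeps ping-ponging the two blocks, but both fit comfortably in the small associative buffer, so only the compulsory misses reach the next level.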

This note was uploaded on 01/29/2008 for the course CS 152 taught by Professor Kubiatowicz during the Spring '04 term at University of California, Berkeley.


