Lecture 17 - Chapter 5: Large and Fast: Exploiting Memory Hierarchy

Chapter 5 Large and Fast: Exploiting Memory Hierarchy
§5.1 Introduction

Memory Technology
- Static RAM (SRAM): 0.5 ns – 2.5 ns access time, $2000 – $5000 per GB
- Dynamic RAM (DRAM): 50 ns – 70 ns access time, $20 – $75 per GB
- Magnetic disk: 5 ms – 20 ms access time, $0.20 – $2 per GB
- Ideal memory: the access time of SRAM with the capacity and cost/GB of disk

Principle of Locality
- Programs access only a small proportion of their address space at any time
- Temporal locality: items accessed recently are likely to be accessed again soon
  (e.g., instructions in a loop, induction variables)
- Spatial locality: items near those accessed recently are likely to be accessed soon
  (e.g., sequential instruction access, array data)

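As a concrete illustration (not part of the original slides), the C loop below shows both kinds of locality at work in ordinary code:

```c
#include <stddef.h>

/* Sums an array. The index i and accumulator sum are reused on every
 * iteration (temporal locality); the elements a[0], a[1], ... live at
 * consecutive addresses (spatial locality), so a cache that fetches
 * multi-word blocks satisfies most of these reads from a block that is
 * already present. */
double sum_array(const double *a, size_t n)
{
    double sum = 0.0;               /* reused every iteration: temporal locality */
    for (size_t i = 0; i < n; i++)  /* i is an induction variable */
        sum += a[i];                /* sequential accesses: spatial locality */
    return sum;
}
```
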
Taking Advantage of Locality
- Memory hierarchy
- Store everything on disk
- Copy recently accessed (and nearby) items from disk to a smaller DRAM memory
  - This DRAM serves as main memory
- Copy more recently accessed (and nearby) items from DRAM to a smaller SRAM memory
  - This SRAM serves as cache memory attached to the CPU

Memory Hierarchy Levels
- Block (aka line): the unit of copying; may be multiple words
- If accessed data is present in the upper level
  - Hit: access satisfied by the upper level
  - Hit ratio: hits / accesses
- If accessed data is absent
  - Miss: block copied from the lower level
    - Time taken: miss penalty
    - Miss ratio: misses / accesses = 1 – hit ratio
  - Accessed data is then supplied from the lower level

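A small worked sketch of how the two ratios relate (not from the slides; the access counts and timings are made up). The last line uses the standard average-access-time formula that follows from these definitions:

```c
#include <stdio.h>

int main(void)
{
    /* Made-up access counts, purely to illustrate the definitions. */
    long hits = 940, misses = 60;
    long accesses = hits + misses;

    double hit_ratio  = (double)hits   / accesses;
    double miss_ratio = (double)misses / accesses;

    /* Miss ratio is, by definition, 1 - hit ratio. */
    printf("hit ratio       = %.3f\n", hit_ratio);
    printf("miss ratio      = %.3f\n", miss_ratio);
    printf("1 - hit ratio   = %.3f\n", 1.0 - hit_ratio);

    /* With assumed timings (1-cycle hit, 100-cycle miss penalty),
     * average access time = hit time + miss ratio * miss penalty. */
    double avg_cycles = 1.0 + miss_ratio * 100.0;
    printf("avg access time = %.1f cycles\n", avg_cycles);
    return 0;
}
```
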
§5.2 The Basics of Caches

Cache Memory
- Cache memory: the level of the memory hierarchy closest to the CPU
- Given accesses X1, …, Xn-1, Xn:
  - How do we know if the data is present?
  - Where do we look?

Direct Mapped Cache
- Location determined by address
- Direct mapped: only one choice
  - (Block address) modulo (#Blocks in cache)
- #Blocks is a power of 2, so use the low-order address bits (see the sketch below)

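A minimal sketch of this index computation (not from the slides), assuming the block address has already been extracted from the byte address:

```c
#include <stdint.h>

/* Direct-mapped index: (block address) modulo (#blocks in cache).
 * When num_blocks is a power of two, the modulo reduces to keeping the
 * low-order bits of the block address, i.e. a bitwise AND with
 * (num_blocks - 1). */
static inline uint32_t cache_index(uint32_t block_addr, uint32_t num_blocks)
{
    return block_addr & (num_blocks - 1);   /* == block_addr % num_blocks */
}
```
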
Tags and Valid Bits
- How do we know which particular block is stored in a cache location?
  - Store the block address as well as the data
  - Only the high-order address bits are actually needed; these form the tag
- How do we know whether a location holds real data at all?
  - Valid bit: 1 = present, 0 = not present (initially 0)

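Putting the index, tag, and valid bit together, here is a minimal sketch of a direct-mapped lookup (not from the slides; the cache size and one-word blocks are assumptions for brevity):

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 1024u         /* assumed cache size, a power of two */

/* One direct-mapped cache line: a valid bit, the tag (high-order bits of
 * the block address), and the cached data (one word here for brevity). */
struct cache_line {
    bool     valid;              /* 0 after reset: nothing cached yet */
    uint32_t tag;
    uint32_t data;
};

static struct cache_line cache[NUM_BLOCKS];

/* Returns true on a hit. The index comes from the low-order bits of the
 * block address; the remaining high-order bits are compared against the
 * stored tag, and the valid bit prevents matching an empty line. */
bool cache_lookup(uint32_t block_addr, uint32_t *data_out)
{
    uint32_t index = block_addr & (NUM_BLOCKS - 1);
    uint32_t tag   = block_addr / NUM_BLOCKS;   /* high-order bits */

    if (cache[index].valid && cache[index].tag == tag) {
        *data_out = cache[index].data;
        return true;             /* hit */
    }
    return false;                /* miss: block must be fetched from below */
}
```
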