EECS/CS 370 Memory Systems, Lecture 23

Seven lectures on memory
1. Introduction to memory systems
2. Basic cache design
   - Tags, blocks, hits, and misses
   - Block replacement and the principle of locality
3. Write-back and write-through caches
4. Associativity
5. Cache interactions
6. Virtual memory
7. Making VM faster: TLBs

Basic cache design
Cache memory can copy data from any part of main memory. Each cache entry has two parts:
- The TAG (a CAM) holds the memory address.
- The BLOCK (SRAM) holds the memory data.

Accessing the cache:
- Compare the reference address with the tag.
- If they match, get the data from the cache block.
- If they don't match, get the data from main memory.

Cache organization
A cache memory consists of multiple tag/block pairs, called cache lines.
- Searches can be done in parallel (within reason).
- At most one tag will match.
- If a tag matches, it is a cache HIT; if no tag matches, it is a cache MISS.
Our goal is to keep the data we think will be accessed in the near future in the cache.

Cache operation
Every cache miss fetches the data from memory and ALLOCATEs a cache line to put the data in, just like any CAM write. Which line should be allocated?
- Random? OK, but hard to grade test questions.
- Better than random? How?

Picking the most likely addresses
What is the probability of accessing a random memory location? With no other information, every address is equally likely. But programs are not random: they tend to use the same memory locations over and over. We can use this to pick the most-referenced locations to put into the cache.

Temporal locality
The principle of temporal locality in program references says that if you access a memory location (e.g., 1000), you are more likely to re-access that location than to reference some other random location.
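The tag-match lookup and miss allocation described above can be sketched as a toy model. This is an illustrative sketch, not the lecture's own code: the names `CacheLine` and `cache_access` are invented here, and it uses the "random" allocation policy mentioned above.

```python
import random

class CacheLine:
    """One tag/block pair. The tag models the CAM part, the block the SRAM part."""
    def __init__(self):
        self.valid = False
        self.tag = None    # holds the memory address
        self.block = None  # holds the memory data

def cache_access(lines, memory, address):
    """Return (data, hit?). On a miss, fetch from memory and allocate a
    randomly chosen line (the simple policy mentioned in the lecture)."""
    # Compare the reference address against every tag; at most one matches.
    for line in lines:
        if line.valid and line.tag == address:   # tag match: cache HIT
            return line.block, True
    # No tag matched: cache MISS. Get the data from main memory and
    # allocate a line for it, just like a CAM write.
    victim = random.choice(lines)
    victim.valid = True
    victim.tag = address
    victim.block = memory[address]
    return victim.block, False
```

A second access to the same address then hits, because its tag is now resident in one of the lines.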
Using locality in the cache
Temporal locality says any miss data should be placed into the cache: it is the most recently referenced location. Temporal locality also says that the least recently referenced (least recently used, or LRU) cache line should be evicted to make room for the new line. Because the re-access probability falls the longer a cache line goes unreferenced, the LRU line is the least likely to be re-referenced.

A very simple memory system
[Slide diagram: a processor with registers R0-R3, a cache with 2 cache lines (4-bit tag field, 1-byte block, valid bits), and a small byte-addressable main memory with addresses 1-15. Instruction sequence: Ld R1 M[1]; Ld R2 M[5]; Ld R3 M[1]; Ld R3 M[7]; Ld R2 M[7].]
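The simple memory system's load sequence can be replayed against a 2-line LRU cache to see which accesses hit. This is a minimal sketch (the function name `simulate` is invented here), tracking only addresses and hit/miss outcomes, not the data bytes from the slide.

```python
def simulate(accesses, num_lines=2):
    """Hit/miss trace for a tiny fully-associative cache with LRU eviction."""
    lru = []     # resident tags, least recently used first
    trace = []
    for addr in accesses:
        if addr in lru:
            trace.append((addr, "hit"))
            lru.remove(addr)        # will be re-appended as most recent
        else:
            trace.append((addr, "miss"))
            if len(lru) >= num_lines:
                lru.pop(0)          # evict the LRU line
        lru.append(addr)            # addr is now the most recently used
    return trace

# The lecture's sequence: Ld M[1], Ld M[5], Ld M[1], Ld M[7], Ld M[7]
print(simulate([1, 5, 1, 7, 7]))
# → [(1, 'miss'), (5, 'miss'), (1, 'hit'), (7, 'miss'), (7, 'hit')]
```

Note how the third access hits (address 1 is still resident) and the fourth evicts address 5, the least recently used line, exactly as the LRU rule above prescribes.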