LECTURE 20

Topic for Today: Reducing Cache Miss Penalty

Scribe?

Technique-1: Prioritize Read Misses over Writes
- Write-through cache: writes go through a write buffer; beware of consistency between the buffer and memory.
- Write-back cache: write the dirty block back after processing the read miss.
- Example: store x; load y; load x, with x and y in the same block. The read of x could fetch a stale value from memory while the store to x is still waiting in the write buffer.
- Possible solution: wait for the write buffer to drain before processing any read miss.
- Better (but more complex) solution: check the write buffer on a read miss and, if there is no conflict, process the read miss first (see the first sketch after these notes).

Technique-2: Sub-Block Placement
- A sub-block is a unit smaller than the full block.
- A valid bit is added to each sub-block.
- Only the needed sub-block is read on a cache miss.
- [Figure: cache entries with tags 100, 200, 164, and 270, each tag paired with a row of per-sub-block valid bits.]
- How is this different from just using a smaller block size? Tag storage is reduced, since one tag covers several sub-blocks (good for an on-chip cache). A sketch of such an entry follows these notes.

Technique-3: Restart the CPU ASAP
- Early restart: the CPU can proceed as soon as the requested word is loaded into the cache.
- Critical word first: the requested word is fetched first, a.k.a. wrapped fetch or requested word first. The fetch-order sketch after these notes illustrates the wrap-around.
- Both techniques help most for caches with large blocks.
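Below is a minimal C sketch, not from the lecture, of the "check the write buffer" policy in Technique-1. The names and sizes (wb_entry_t, read_hits_write_buffer, a 4-entry buffer) are assumptions made for illustration: on a read miss the buffer is scanned, a conflicting pending write forwards its value, and a non-conflicting read is allowed to go to memory ahead of the buffered writes.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WB_ENTRIES 4

/* One pending write sitting in the write buffer (hypothetical layout). */
typedef struct {
    uint32_t addr;
    uint32_t data;
    bool     valid;
} wb_entry_t;

static wb_entry_t write_buffer[WB_ENTRIES];

/* Scan the buffer on a read miss: a matching pending write forwards its
 * value; otherwise the read miss may bypass the buffered writes. */
static bool read_hits_write_buffer(uint32_t addr, uint32_t *data_out)
{
    for (int i = 0; i < WB_ENTRIES; i++) {
        if (write_buffer[i].valid && write_buffer[i].addr == addr) {
            *data_out = write_buffer[i].data;  /* newest value wins */
            return true;
        }
    }
    return false;  /* no conflict: safe to service the read miss first */
}

int main(void)
{
    /* store x (buffered), then a read miss on x: the check catches it. */
    write_buffer[0] = (wb_entry_t){ .addr = 0x100, .data = 42, .valid = true };

    uint32_t v;
    if (read_hits_write_buffer(0x100, &v))
        printf("read miss on 0x100 forwarded from write buffer: %u\n",
               (unsigned)v);
    if (!read_hits_write_buffer(0x200, &v))
        printf("read miss on 0x200 can bypass the pending writes\n");
    return 0;
}

The slide's simpler alternative, draining the buffer before any read miss, avoids the scan but stalls every read miss behind unrelated writes.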
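A minimal C sketch of a sub-blocked cache entry for Technique-2; the sub-block count and size are arbitrary assumptions, not values from the lecture. The point is that one tag covers the whole block while each sub-block carries its own valid bit, so a miss fills only the sub-block it touched.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SUBBLOCKS       4
#define SUBBLOCK_BYTES 16

/* One cache entry: a single tag, but a separate valid bit per sub-block. */
typedef struct {
    uint32_t tag;
    bool     valid[SUBBLOCKS];
    uint8_t  data[SUBBLOCKS][SUBBLOCK_BYTES];
} cache_line_t;

/* On a miss whose tag already matches, fetch only the missing sub-block. */
static void fill_subblock(cache_line_t *line, int sb, const uint8_t *from_mem)
{
    memcpy(line->data[sb], from_mem, SUBBLOCK_BYTES);
    line->valid[sb] = true;   /* the other sub-blocks stay invalid */
}

int main(void)
{
    cache_line_t line = { .tag = 100 };      /* all valid bits start false */
    uint8_t mem_chunk[SUBBLOCK_BYTES] = { 0xAB };

    fill_subblock(&line, 2, mem_chunk);      /* the miss touched sub-block 2 */
    for (int i = 0; i < SUBBLOCKS; i++)
        printf("sub-block %d valid: %d\n", i, line.valid[i]);
    return 0;
}

Compared with simply shrinking the block size, this keeps one tag per large block, which is where the tag-storage saving comes from.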
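A small C sketch of the wrapped-fetch order behind "critical word first" in Technique-3, assuming an 8-word block (the block size is an assumption). The requested word comes back first and the remaining words wrap around it, which is what lets the CPU restart early.

#include <stdio.h>

#define WORDS_PER_BLOCK 8

/* Print the order in which the words of a block are fetched when the
 * miss was on requested_word: start there, then wrap around the block. */
static void print_fetch_order(int requested_word)
{
    for (int i = 0; i < WORDS_PER_BLOCK; i++) {
        int w = (requested_word + i) % WORDS_PER_BLOCK;  /* wrap around */
        printf("fetch word %d%s\n", w,
               i == 0 ? "  <- CPU restarts here" : "");
    }
}

int main(void)
{
    print_fetch_order(5);   /* e.g., a miss on word 5 of an 8-word block */
    return 0;
}

With plain early restart the block is fetched in order and the CPU resumes when the requested word arrives; critical word first reorders the fetch so that wait is as short as possible, which is why both matter most for large blocks.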