EE357Unit14_Cache

11/1/10, © Mark Redekopp, All rights reserved

EE 357 Unit 14: Cache
• Cache Definitions
• Cache Address Mapping
• Cache Performance

What is Cache Memory?
[Figure: the processor core and the cache memory sit together inside the usual chip boundary; main memory sits outside it]

Motivation for Cache Memory
• Large memories are inherently slow
  – We need a large memory to hold the code and data of multiple applications
• Small memories are inherently fast
  – Important fact: the processor accesses only a small fraction of its code and data in any short time period
• Use both!
  – A large memory serves as the global store, and the cache serves as a smaller "working-set" store (illustrated in the micro-benchmark sketch after the Memory Hierarchy slide)

Memory Hierarchy
• A memory hierarchy provides the ability to access data quickly from the lower levels while still providing a large overall memory size
• [Figure: from the "higher" levels to the "lower" levels: registers, L1 cache (SRAM), L2 cache (SRAM), main memory (DRAM), backing store (magnetic or FLASH memory); moving down the hierarchy, size grows larger; moving up, speed grows faster and cost per byte grows more expensive]
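The working-set idea can be seen directly in software. Below is a minimal C micro-benchmark sketch (not from the slides; the array sizes and access count are assumptions to be tuned to a particular machine) that performs the same number of reads against a small, cache-resident array and a large array that far exceeds the cache. The small working set typically finishes much faster because most of its accesses hit in cache.

```c
/* Illustrative micro-benchmark (assumed example, not from the slides).
 * Sizes below are guesses for a typical desktop; adjust to your caches. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SMALL_WORDS (8 * 1024)         /* ~32 KB: fits in a typical L1   */
#define LARGE_WORDS (64 * 1024 * 1024) /* ~256 MB: far exceeds any cache */
#define ACCESSES    (64 * 1024 * 1024) /* same access count for both     */

/* Read 'accesses' words from the array, wrapping around its length.
 * 'volatile' forces the compiler to actually perform every load. */
static double sweep(volatile int *a, size_t len, size_t accesses)
{
    clock_t t0 = clock();
    int sum = 0;
    for (size_t i = 0; i < accesses; i++)
        sum += a[i % len];
    (void)sum;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    int *small = calloc(SMALL_WORDS, sizeof *small);
    int *large = calloc(LARGE_WORDS, sizeof *large);
    if (!small || !large) return 1;
    printf("small working set: %.2f s\n", sweep(small, SMALL_WORDS, ACCESSES));
    printf("large working set: %.2f s\n", sweep(large, LARGE_WORDS, ACCESSES));
    free(small);
    free(large);
    return 0;
}
```

Exact timings depend on the machine, but the gap between the two runs is the hierarchy at work: the same processor and the same number of reads, differing only in whether the working set fits in cache.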
Principle of Locality
• There are two dimensions to this principle: space and time
• Spatial locality
  – Future accesses will likely cluster near current accesses
  – Instructions and data arrays are sequential (each word follows the next)
• Temporal locality
  – Recently accessed code and data will likely be accessed again soon
  – The same code and data are accessed repeatedly (loops, subroutines, etc.)
  – 90/10 rule: analysis shows that roughly 10% of the written instructions account for 90% of the executed instructions
(Both kinds of locality are illustrated in the loop sketch after the Cache Blocks/Lines slides below.)

Cache and Locality
• Caches take advantage of locality
• Spatial locality
  – Caches store not individual words but blocks of words (a.k.a. a "cache line")
  – Caches always bring in a group of sequential words because if we access one word, we are likely to access the next
  – Bringing in blocks of sequential words also exploits the main-memory architecture (e.g., FPM DRAM, SDRAM)
• Temporal locality
  – Data is left in the cache because it will likely be accessed again

Cache Blocks/Lines
• A cache is broken into "blocks" or "lines"
  – Any time data is brought in, the entire block containing it is brought in
  – Blocks start on addresses that are multiples of their size (see the block-address sketch below)
  – The block size usually matches the burst size of main memory
• [Figure: a 128B cache holding 4 blocks (lines) of 8 words (32 bytes) each; main memory blocks at 0x400000, 0x400040, 0x400080, 0x4000c0, 0x400100, 0x400140; a wide (multi-word) FSB connects main memory to the cache, while a narrow (one-word) bus connects the cache to the processor]

Cache Blocks/Lines (continued)
• Whenever the processor generates a read or a write, it first checks the cache to see if it contains the desired data
  – If so, it can get the data quickly from the cache
  – Otherwise, it must go to the slow main memory to get the data
• Example read:
  1. The processor requests the word at 0x400028
  2. The cache does not have the data and requests the whole cache line 0x400020-0x40003F
  3. Main memory responds with the line
  4. The cache forwards the desired word to the processor
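As referenced above, here is a short C sketch (an assumed example, not from the slides) showing how an ordinary loop exhibits both kinds of locality:

```c
#include <stdio.h>
#include <stddef.h>

int sum_array(const int *a, size_t n)
{
    int sum = 0;                 /* 'sum' and 'i' are touched on every
                                    iteration: temporal locality        */
    for (size_t i = 0; i < n; i++)
        sum += a[i];             /* a[0], a[1], a[2], ... are adjacent in
                                    memory: spatial locality. One cache-
                                    line fill services several
                                    consecutive iterations.             */
    return sum;
    /* The loop's own instructions are fetched repeatedly as well, so
       instruction fetch shows temporal locality too. */
}

int main(void)
{
    int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%d\n", sum_array(data, 8));  /* prints 36 */
    return 0;
}
```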
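And the block-address arithmetic referenced above, as a quick sketch (an assumed helper mirroring the slide's 0x400028 example): because blocks start on multiples of the block size, clearing the low offset bits of an address yields the base of its line.

```c
#include <stdio.h>
#include <inttypes.h>

#define BLOCK_SIZE 32u  /* 8 words x 4 bytes, as in the 128B cache above */

int main(void)
{
    uint32_t addr   = 0x400028;                 /* requested word        */
    uint32_t base   = addr & ~(BLOCK_SIZE - 1); /* 0x400020              */
    uint32_t last   = base + BLOCK_SIZE - 1;    /* 0x40003F              */
    uint32_t offset = addr & (BLOCK_SIZE - 1);  /* byte 8 within the line */
    printf("line 0x%08" PRIX32 "-0x%08" PRIX32 ", byte offset %" PRIu32 "\n",
           base, last, offset);
    return 0;
}
```

This reproduces the slide's numbers: the word at 0x400028 lives in the line 0x400020-0x40003F, so that is the block the cache requests from memory.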
Cache Definitions
• Cache hit: the data requested by the processor is found in the cache
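As a hedged illustration of the hit test this slide introduces, here is a minimal C sketch assuming a direct-mapped organization sized like the 128B example above; the struct and field names are illustrative, not from the slides. An address is split into tag, index, and offset fields, and a hit means the indexed line is valid and stores a matching tag.

```c
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES   4u  /* 128B cache of 32B lines, as on the slides */
#define OFFSET_BITS 5u  /* log2(32 bytes per line)                   */
#define INDEX_BITS  2u  /* log2(4 lines)                             */

/* One cache line: valid bit, tag, and the block of data. */
struct line { bool valid; uint32_t tag; uint8_t data[32]; };
static struct line cache[NUM_LINES];

/* Hit when the line selected by the index bits is valid and its
 * stored tag matches the tag bits of the address. */
static bool is_hit(uint32_t addr)
{
    uint32_t index = (addr >> OFFSET_BITS) & (NUM_LINES - 1);
    uint32_t tag   = addr >> (OFFSET_BITS + INDEX_BITS);
    return cache[index].valid && cache[index].tag == tag;
}

int main(void)
{
    uint32_t addr = 0x400028;
    /* Install the line 0x400020-0x40003F, as in the earlier example. */
    uint32_t index = (addr >> OFFSET_BITS) & (NUM_LINES - 1);
    cache[index].valid = true;
    cache[index].tag   = addr >> (OFFSET_BITS + INDEX_BITS);
    printf("0x400028 hit: %d\n", is_hit(addr));      /* 1: hit           */
    printf("0x500028 hit: %d\n", is_hit(0x500028));  /* 0: same index,
                                                        different tag    */
    return 0;
}
```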
