Unit 15: Cache (EE357, Shahin Nazarian, Fall 2009)

University of Southern California, Viterbi School of Engineering
EE357: Basic Organization of Computer Systems
Cache: Definitions, Address Mapping, Performance
Shahin Nazarian, Fall 2009
References: 1) the course textbook, 2) Mark Redekopp's slide series

What is Cache Memory?
- Cache memory is a small, fast memory used to hold copies of data that the processor will likely need to access in the near future.
- The cache sits between the processor and main memory (MM) and is usually built onto the same chip as the processor.
- Read and write requests will hopefully be satisfied by the fast cache rather than by the slow main memory.
(Figure: processor core and cache memory inside the usual chip boundary, with main memory outside it.)

Motivation for Cache Memory
- Large memories are inherently slow, yet we need a large memory to hold the code and data of multiple applications.
- A small memory is inherently faster.
- Important fact: the processor accesses only a small fraction of its code and data in any short time period.
- So use both: a large memory as a global store, and a cache as a smaller working-set store.

Memory Hierarchy
- The memory hierarchy provides the ability to access data quickly from the lower levels while still providing a large overall memory size.
(Figure: the hierarchy from lower to higher levels: registers, L1 cache (SRAM), L2 cache (SRAM), main memory (DRAM), backing store (magnetic or FLASH memory). Lower levels are faster and more expensive; higher levels are larger.)

Principle of Locality
- This principle has two dimensions: space and time.
- Spatial locality: future accesses will likely cluster near current accesses. Instructions and data arrays are sequential (each element follows the next).
- Temporal locality: recent accesses will likely be accessed again soon. The same code and data are accessed repeatedly (loops, subroutines, etc.).
- 90/10 rule: analysis shows that usually 10% of the written instructions account for 90% of the executed instructions.

Cache and Locality
- Caches take advantage of both kinds of locality.
- Spatial locality: caches do not store individual words but blocks of words (a.k.a. cache lines). A cache always brings in a group of sequential words, because if we access one word we are likely to access the next. Bringing in blocks of sequential words also takes advantage of the memory architecture (e.g., FPM, SDRAM).
- Temporal locality: data is left in the cache because it will likely be accessed again.
(Both effects are illustrated in the C sketches at the end of this preview.)

Cache Blocks (Lines)
(The preview is truncated here, at the start of this slide.)
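To make the locality discussion concrete, here is a minimal C sketch (not from the slides; the array name, its size, and the loop structure are illustrative assumptions). It sums the same 2-D array twice: once along rows, matching C's row-major layout and thus exploiting spatial locality, and once along columns, which touches a new cache line on nearly every access.

    /* Minimal locality sketch (illustrative, not from the slides). */
    #include <stdio.h>

    #define N 1024

    static double a[N][N];   /* C is row-major: a[i][0..N-1] are sequential in memory */

    /* Good spatial locality: the inner loop walks sequential addresses,
     * so every word of each fetched cache line is used before moving on. */
    double sum_row_major(void) {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Poor spatial locality: consecutive accesses are N*sizeof(double)
     * bytes apart, so nearly every access touches a different cache line. */
    double sum_col_major(void) {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void) {
        /* Both loops do the same N*N additions, yet on real hardware the
         * row-major version is typically much faster. Temporal locality
         * shows up too: the small loop body stays resident in the cache
         * for all iterations (the 90/10 rule in action). */
        printf("%f %f\n", sum_row_major(), sum_col_major());
        return 0;
    }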
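The "blocks of words" idea can also be shown with a few lines of arithmetic. In this sketch the 32-byte block size and the example address are assumptions chosen for illustration; the preview does not fix these values.

    /* Minimal sketch: splitting a byte address into a cache block (line)
     * number and an offset within the block. BLOCK_SIZE and the example
     * address are assumed for illustration, not taken from the slides. */
    #include <stdio.h>

    #define BLOCK_SIZE 32u   /* bytes per block; a power of two (assumed) */

    int main(void) {
        unsigned addr = 0x1234ABCDu;   /* arbitrary example byte address */

        /* Because BLOCK_SIZE is a power of two, the low log2(BLOCK_SIZE)
         * address bits give the offset within the line, and the remaining
         * high bits identify the line itself. */
        unsigned offset = addr % BLOCK_SIZE;   /* same as addr & (BLOCK_SIZE - 1) */
        unsigned block  = addr / BLOCK_SIZE;   /* same as addr >> 5 */

        /* On a miss, the cache brings in the whole block: the 32 sequential
         * bytes starting at block*BLOCK_SIZE. Fetching neighbors of the
         * requested word is exactly how a cache exploits spatial locality. */
        printf("addr 0x%08X -> block 0x%07X, offset %u\n", addr, block, offset);
        return 0;
    }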