Lecture_07_Review of Memory Hierarchy

CA Lecture07 - Memory Hierarchy Review (cwliu@twins.ee.nctu.edu.tw)

07-1: 5008: Computer Architecture
Appendix C: Review of Memory Hierarchy
07-2: 1977: DRAM faster than microprocessors
Apple ][ (1977), Steve Wozniak and Steve Jobs
CPU: 1000 ns; DRAM: 400 ns
07-3: CPU vs. Memory Performance Trends
[Figure: relative performance (vs. 1980) as a function of year. Processor performance improves at +35%/year, later +55%/year, while memory improves at only +7%/year, opening a performance gap between processor and memory.]
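The gap on the slide comes from compounding the two growth rates. A minimal sketch, using the slide's +55%/year (CPU) and +7%/year (DRAM) figures; the 15-year window is an illustrative assumption, not a number from the slide:

```python
# Compound the per-year improvement rates to see how the CPU-DRAM
# performance gap widens over time.
cpu_rate, dram_rate = 1.55, 1.07   # +55%/yr CPU, +7%/yr DRAM (from the slide)

cpu = dram = 1.0                   # both normalized to 1.0 at the start
for year in range(15):             # 15 years of compounding (assumed window)
    cpu *= cpu_rate
    dram *= dram_rate

gap = cpu / dram                   # relative gap after 15 years
print(f"gap after 15 years: {gap:.0f}x")
```

Even though each rate looks modest year-over-year, the ratio (1.55 / 1.07 ≈ 1.45) itself compounds, which is why the slides describe the gap as growing roughly 50% per year.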
07-4: Why? Because of a fundamental constraint: the larger the memory, the higher the access latency. This is a characteristic of all present memory technologies, and it will remain true in all future technologies:
- Quantum mechanics gives a minimum size for bits (assuming energy density is limited). Thus n bits require Ω(n) volume of space.
- At light speed, random access therefore takes Ω(n^(1/3)) time (assuming a roughly flat region of spacetime).
Of course, specific memory technologies (or a suite of available technologies) may scale even worse than Ω(n^(1/3)) beyond some point.
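The slide's volume argument can be written out in one line. A minimal sketch, assuming each bit occupies at least some fixed volume v₀ in roughly flat space, with c the speed of light:

```latex
V \ge v_0 n = \Omega(n), \qquad
r = \Omega\!\left(V^{1/3}\right) = \Omega\!\left(n^{1/3}\right), \qquad
t \ge \frac{r}{c} = \Omega\!\left(n^{1/3}\right).
```

That is: n bits occupy volume Ω(n); a region of that volume has radius Ω(n^(1/3)); and a light-speed signal crossing that radius needs time Ω(n^(1/3)) in the worst case.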
07-5: What Programmers Want
Programmers like to be insulated from physics: it's easier to think about programming models if you don't have to worry about physical constraints. However, ignoring physics in algorithm design always sacrifices some runtime efficiency. But programmer productivity is (for now) economically more important than performance. Programmers want to pretend they have the following memory model: an unlimited number of memory locations, all accessible instantly!
07-6: What Can We Provide?
- A small number of memory locations, all accessible quickly; and/or
- A large number of memory locations, all accessible more slowly; and/or
- A memory hierarchy, which:
  - has both kinds,
  - can automatically transfer data between them, often (hopefully) before it's needed, and
  - approximates (gives the illusion of) a memory with as many locations as the large memory, all accessible almost as quickly as the small memory!
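The hierarchy described above can be sketched in a few lines. This is not the lecture's code; it is a minimal two-level model where a small LRU cache fronts a large slow store, with made-up latency values, to show how a hit is served almost at small-memory speed:

```python
from collections import OrderedDict

FAST_LATENCY = 1     # cycles to reach the small memory (assumed value)
SLOW_LATENCY = 100   # cycles to reach the large memory (assumed value)

class Hierarchy:
    def __init__(self, cache_size=4):
        self.cache = OrderedDict()   # small, fast memory (LRU order)
        self.memory = {}             # large, slow memory
        self.cache_size = cache_size

    def read(self, addr):
        """Return (value, latency); transfer data into the cache on a miss."""
        if addr in self.cache:
            self.cache.move_to_end(addr)       # mark as most recently used
            return self.cache[addr], FAST_LATENCY
        value = self.memory.get(addr, 0)       # go to the slow memory
        self.cache[addr] = value               # bring it closer for next time
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)     # evict least recently used
        return value, SLOW_LATENCY + FAST_LATENCY

h = Hierarchy()
h.memory[0x10] = 42
_, first = h.read(0x10)    # miss: pays the slow-memory latency
_, second = h.read(0x10)   # hit: almost as fast as the small memory alone
print(first, second)       # 101 1
```

The second access illustrates the slide's claim: after the automatic transfer, the large memory's contents appear to be accessible at nearly the small memory's speed.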
07-7: The Processor-DRAM Gap
[Figure: performance (1/latency) vs. year. CPU improves ~60% per year (2x in 1.5 years); DRAM improves ~9% per year (2x in 10 years). The gap grew ~50% per year.]
Q: How do architects address this gap?
A: Put smaller, faster "cache" memories between CPU and DRAM. Create a "memory hierarchy".
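The standard back-of-the-envelope for why a cache closes this gap is average memory access time (AMAT): hit time plus miss rate times miss penalty. The latencies and miss rate below are illustrative assumptions, not figures from the slide:

```python
hit_time = 1        # cycles to hit in the cache (assumed)
miss_penalty = 100  # cycles to reach DRAM on a miss (assumed)
miss_rate = 0.05    # 5% of accesses miss (assumed)

# AMAT = hit time + miss rate * miss penalty
amat = hit_time + miss_rate * miss_penalty
print(amat)  # 6.0
```

With even a modest 95% hit rate, the average access costs 6 cycles rather than 100, so the processor mostly sees cache speed, not DRAM speed, which is exactly the illusion the memory hierarchy is built to provide.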

This note was uploaded on 08/23/2009 for the course IEE 5513 taught by Professor Cwliu during the Spring '09 term at National Chiao Tung University.

