Computer System Organization

Computer Organization
CDA 3103, Dr. Hassan Foroosh, Dept. of Computer Science, UCF
© Copyright Hassan Foroosh 2004

Computer System Organization
- A computer consists of a Processor (Control + Datapath), Memory, and Devices (Input, Output).
- Note: these lectures introduce the conceptual framework first, then the application; the textbook presents them in the other order.

Overview of Lecture
- Basics
  - Why use a memory hierarchy?
  - Memory technologies: SRAM and DRAM
  - Principle of locality
- Caches
  - Mapping, blocks, write policy, replacement, performance

Application Beyond the Memory Hierarchy
- Cache techniques are applied throughout computer systems:
  - Memory cache
  - Virtual memory
  - Translation buffers
  - Branch prediction
  - Disk cache
  - Distributed file system cache
  - Browser cache
  - Proxy cache
- So even if you never design processor or memory system hardware, it is important to understand cache principles and techniques.
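The cache ideas listed above (mapping, blocks, hits and misses) can be made concrete with a toy simulator. The sketch below models a direct-mapped cache with hypothetical parameters (8 blocks of 4 bytes each, chosen for illustration, not taken from the lecture) and shows how spatial locality turns most accesses in a sequential scan into hits:

```python
# Minimal sketch of a direct-mapped cache, with illustrative (assumed)
# parameters: 8 blocks of 4 bytes each.

NUM_BLOCKS = 8        # number of cache blocks (index = block number mod 8)
BLOCK_SIZE = 4        # bytes per block (addresses 0-3 share block 0, etc.)

cache_tags = [None] * NUM_BLOCKS  # one tag per block; None means invalid

def access(addr):
    """Return True on a hit, False on a miss; fill the block on a miss."""
    block_number = addr // BLOCK_SIZE
    index = block_number % NUM_BLOCKS
    tag = block_number // NUM_BLOCKS
    if cache_tags[index] == tag:
        return True
    cache_tags[index] = tag   # direct-mapped: only one candidate to replace
    return False

# Sequential scan of 32 bytes: each 4-byte block misses once (on its first
# byte) and then hits on the remaining 3 bytes -- spatial locality at work.
hits = sum(access(a) for a in range(32))
print(hits, "hits out of 32 accesses")   # 24 hits, 8 compulsory misses
```

A second pass over the same addresses would hit every time, illustrating temporal locality as well.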
Memory Technologies
- Random-access technologies
  - "Random" => access time is (approximately) the same for all locations
  - SRAM (Static Random Access Memory)
    - 4 or 6 transistors per bit
    - Fast, but low density, high power, expensive
    - "Static" => internal feedback maintains data while power is on
  - DRAM (Dynamic Random Access Memory)
    - 1 transistor per bit
    - Inexpensive, high density, low power, but slow (2-100x SRAM)
    - "Dynamic" => must be "refreshed" regularly to maintain data
- "Not-so-random" access technologies
  - Access time varies by location at any given time
  - Common technologies mechanically move a magnetic or optical (not electronic) recording medium past sensors
  - Examples: magnetic disk, magnetic tape, optical CD
  - Higher density and lower cost than RAM, but slow (10^5-10^8x DRAM)
  - Non-volatile => maintain data without power
  - A typical hierarchy includes SRAM, DRAM, and disk

The Goal
- The illusion of large, fast, cheap memory
- Fact: large memories are slow; fast memories are small
- How do we create a memory that is large, cheap, and fast (most of the time)?
  - Hierarchy
  - Parallelism

Who Cares About the Memory Hierarchy?
[Figure: "Moore's Law" plot of the Processor-DRAM memory gap (latency), performance vs. time, 1980-2000. Processor performance grows ~60%/yr (2x every 1.5 years); DRAM performance grows ~9%/yr (2x every 10 years); the processor-memory performance gap grows ~50% per year.]
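The hierarchy's "large, fast, cheap (most of the time)" illusion is usually quantified as average memory access time, AMAT = hit time + miss rate x miss penalty. A quick sketch with illustrative numbers (the specific latencies and miss rate below are assumptions, not figures from the lecture):

```python
# AMAT = hit_time + miss_rate * miss_penalty.
# All numbers below are illustrative assumptions.

hit_time = 1       # ns: fast SRAM cache hit
miss_rate = 0.05   # 5% of accesses miss the cache
miss_penalty = 60  # ns: fetch the block from slower DRAM

amat = hit_time + miss_rate * miss_penalty
print(f"AMAT = {amat} ns")   # 1 + 0.05 * 60 = 4 ns
```

With a 95% hit rate, the memory system behaves almost as fast as the small SRAM cache while offering the capacity of DRAM, which is exactly the point of the hierarchy.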
Technology Trends

          Capacity         Speed (latency)
Logic     4x in 3 years    2x in 3 years
DRAM      4x in 3 years    2x in 15 years
Disk      4x in 3 years    2x in 10 years

DRAM generations:

Year    Size      Cycle Time
1980    64 Kb     250 ns
1983    256 Kb    220 ns
1986    1 Mb      190 ns
1989    4 Mb      165 ns
1992    16 Mb     145 ns
1995    64 Mb     120 ns

(Roughly 1000x growth in capacity over this period, but only about 2x improvement in cycle time.)

The Big Picture: Memory as Hierarchy
- The Processor (Control + Datapath) sits atop a hierarchy of memories.
- Moving down the hierarchy: speed goes from fastest to slowest, size from smallest to biggest, and cost per bit from highest to lowest.
- Memory hierarchy usage is driven by differences in technology cost and performance, and is made effective by the characteristics of program behavior.
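The "1000x capacity, 2x speed" summary can be checked directly against the endpoints of the DRAM table above:

```python
# Verify the growth factors implied by the DRAM table (1980 vs. 1995).

size_1980_kb = 64            # 64 Kb in 1980
size_1995_kb = 64 * 1024     # 64 Mb in 1995, expressed in Kb
cycle_1980_ns = 250
cycle_1995_ns = 120

capacity_growth = size_1995_kb / size_1980_kb   # 1024x, i.e. ~1000x
speed_growth = cycle_1980_ns / cycle_1995_ns    # ~2.08x, i.e. ~2x

print(f"capacity: {capacity_growth:.0f}x, speed: {speed_growth:.2f}x")
```

Capacity improved three orders of magnitude while latency barely halved, which is why the processor-memory gap in the earlier plot keeps widening and why caches are indispensable.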