Lecture 11 - CSCE 2610: Exploiting Memory Hierarchy
CSCE 2610 Exploiting Memory Hierarchy: Memory/Caching Bottleneck

In our previous discussion of the pipeline, we assumed that memory access could be handled in a single cycle. That is overly simplistic: as it turns out, memory access is THE bottleneck of modern computing.

The Gap

Memory speed has not kept up with the gains in processor speed, so we have to employ some tricks.

Memory Hierarchy

The various memory technologies can be visualized as a pyramid. Components at the top are FAST, but small and expensive. Components at the bottom are SLOW, but large and cheap.

Memory Technologies

  Memory technology   Typical access time   Clock cycles (5 GHz clock)   $ per GB in 2008
  Registers           0.1 ns                0.5                          ???
  SRAM                0.5-2.5 ns            2-12                         $2000-$5000
  DRAM                50-70 ns              250-350                      $20-$75
  Magnetic disk       5-20 ms               25M-100M                     $0.20-$2

What we'd like is FAST access (like SRAM) with large, cheap storage (like a hard disk). We can fake it using a combination of different technologies.

External Storage

In the context of virtual memory (more later), we'll see that the hard drive can be used in place of memory. A system's available memory then becomes practically limitless, but VERY, VERY slow. Hard drives are now being made (inexpensively) in the terabyte range.

Main Memory (DRAM)

Non-cached main memory typically uses Dynamic Random-Access Memory, or DRAM. DRAM stores its data in a capacitor that is accessed by a single transistor, i.e. one transistor per bit (cheap per bit). Capacitors lose charge over time, so each cell must be periodically refreshed (read, then written back).

DRAM fetches an entire row (e.g. 2 KB), then selects which column to access. This takes two steps (row access, then column access).

DRAM Improvements

The rate of improvement in DRAM speed has slowed in recent years.

Cache Memory (SRAM)

SRAM uses a single access step (quicker), but requires more transistors per bit (more expensive). The accompanying figure shows a 2M x 16 SRAM with a 21-bit address: 2M entries, each 16 bits wide.

Multiplexors? A 2-million-input multiplexor is a bit much. Instead, the correct line is selected using tri-state logic.
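As a sanity check on the "clock cycles" column of the table above: the cycle counts are just the access times divided by the 0.2 ns period of the assumed 5 GHz clock. The small C sketch below is not part of the original notes; the access times and clock rate come from the table, everything else is illustration only (the notes round the SRAM row to 2-12).

    #include <stdio.h>

    /* Reproduce the "clock cycles" column: cycles = access time / clock period.
     * Assumes the 5 GHz clock from the table, i.e. a 0.2 ns period. */
    int main(void) {
        const double clock_hz  = 5e9;             /* 5 GHz              */
        const double period_ns = 1e9 / clock_hz;  /* 0.2 ns per cycle   */

        struct { const char *name; double lo_ns, hi_ns; } tech[] = {
            { "Registers",     0.1,  0.1  },
            { "SRAM",          0.5,  2.5  },
            { "DRAM",          50.0, 70.0 },
            { "Magnetic disk", 5e6,  20e6 },       /* 5-20 ms in ns      */
        };

        for (size_t i = 0; i < sizeof tech / sizeof tech[0]; i++)
            printf("%-14s %12.1f - %12.1f cycles\n",
                   tech[i].name,
                   tech[i].lo_ns / period_ns,
                   tech[i].hi_ns / period_ns);
        return 0;
    }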
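The two-step DRAM access (row access, then column access) amounts to splitting a memory address into a row number and a column offset. Here is a minimal C sketch of that split, assuming the 2 KB row size mentioned above and a simple byte-addressed layout; real DRAM parts differ in geometry, banking, and how the column index is expressed, so treat the bit widths as illustrative only.

    #include <stdint.h>
    #include <stdio.h>

    /* Split a byte address into DRAM row and column numbers.
     * Assumes a 2 KB (2048-byte) row as in the notes; the column here is a
     * byte offset within the row, which keeps the arithmetic simple. */
    #define ROW_BYTES 2048u      /* 2 KB row -> 11 column bits */

    static void split_address(uint32_t addr, uint32_t *row, uint32_t *col) {
        *col = addr % ROW_BYTES; /* low 11 bits: column within the open row  */
        *row = addr / ROW_BYTES; /* remaining high bits: which row to open   */
    }

    int main(void) {
        uint32_t row, col;
        split_address(0x0001A2C4u, &row, &col);
        /* Two accesses to the same row reuse the already-open row, so only
         * the (faster) column-access step is repeated for the second one. */
        printf("row = %u, col = %u\n", (unsigned)row, (unsigned)col);
        return 0;
    }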
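Finally, the 2M x 16 SRAM can be modelled behaviorally as a flat array of 2^21 sixteen-bit words indexed directly by the 21-bit address; the point of the slide is that this is a single-step lookup, unlike DRAM's row/column sequence. The sketch below is my own illustration, not the circuit from the figure: in the actual hardware the selected word drives a shared data line through tri-state buffers rather than passing through a 2-million-input multiplexor.

    #include <stdint.h>
    #include <stdio.h>

    /* Behavioral model of a 2M x 16 SRAM: 2^21 entries, each 16 bits wide,
     * so the address is exactly 21 bits and access is one indexed lookup. */
    #define ADDR_BITS 21
    #define ENTRIES   (1u << ADDR_BITS)     /* 2M entries */

    static uint16_t sram[ENTRIES];          /* each entry is a 16-bit word */

    static uint16_t sram_read(uint32_t addr) {
        return sram[addr & (ENTRIES - 1)];  /* keep only the 21 address bits */
    }

    static void sram_write(uint32_t addr, uint16_t data) {
        sram[addr & (ENTRIES - 1)] = data;
    }

    int main(void) {
        sram_write(0x1FFFFF, 0xBEEF);       /* highest 21-bit address */
        printf("0x%04X\n", sram_read(0x1FFFFF));
        return 0;
    }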