Chapter09Rev17 - Chapter 9 (Revision number 17) Memory...

Chapter 9 (Revision number 17) Memory Hierarchy

What is memory hierarchy?

So far, we have treated the physical memory as a black box. In the implementation of LC-2200 (Chapters 3 and 6), we treated memory as part of the datapath. There is an implicit assumption in that arrangement, namely, that accessing the memory takes the same amount of time as performing any other datapath operation. Let us dig a little deeper into that assumption. Today, processor clock speeds have reached the GHz range, which means that the CPU clock cycle time is less than a nanosecond. Let us compare that to state-of-the-art memory speeds. Physical memory, implemented using dynamic random access memory (DRAM) technology, has a cycle time in the range of 100 nanoseconds. We know that in a pipelined processor implementation, the slowest stage determines the clock cycle time. Given that the IF and MEM stages of the pipeline access memory, how can we bridge the 100:1 speed disparity that exists between the CPU and the memory?

Let us revisit the processor datapath of LC-2200. It contains a register file, which is also a kind of memory. The access time of a small 16-element register file matches the speed of the other datapath elements, because the register file is implemented with flip-flops, which have fast switching speeds. Therefore, if we wanted the memory to be comparable in speed to the other datapath elements, we would simply have to implement it with the same technology as the register file. What is the catch here? Memories built out of flip-flops are bulky and consume a considerable amount of power. The virtue of this technology, referred to as static random access memory (SRAM), is speed. However, it does not lend itself to realizing large memory systems due to its exorbitant cost. On the other hand, DRAM technology is much more frugal with respect to power consumption than its SRAM counterpart and lends itself to very large scale integration.
Today, a single DRAM chip may contain up to 256 Mbits with an access time of 70 ns. The virtue of DRAM technology is its size: it is economically feasible to realize large memory systems using DRAM.

9.1 Cache organization

The choice seems clear: it is feasible to have either a small amount of fast memory or a large amount of slow memory. Ideally, we would like to have the size advantage of DRAM and the speed advantage of SRAM. How can we get both? This is where the memory hierarchy comes into play. Figure 9.1 shows the basic idea behind the memory hierarchy. Main memory is the physical memory that is visible to the instruction set of the computer. A cache, as the name suggests, is a hidden storage. What does it contain? The idea is to stash in the cache information brought from the memory. The cache is implemented using SRAM technology and is hence much faster than the main memory, which is implemented using DRAM technology. As a corollary, the cache is smaller than the main memory.
Our intent is as follows:
This note was uploaded on 11/25/2010 for the course CENG 100 taught by Professor Ceng during the Spring '10 term at Universidad Europea de Madrid.
