Cache memories have complex behaviours which can make their effect on system performance difficult to predict.

The improvements in semiconductor process technology which have been exploited to make microprocessors faster have been applied differently to improve memory chips. Standard DRAM parts have got a little faster, but mostly they have been developed to have a much higher capacity.

Processor and memory speeds

In 1980 a typical DRAM part could hold 4 Kbits of data, with 16 Kbit chips arriving in 1981 and 1982. These parts would cycle at 3 or 4 MHz for random accesses, and at about twice this rate for local accesses (in page mode). Microprocessors at that time could request around two million memory accesses per second. In 2000 DRAM parts have a capacity of 256 Mbits per chip, with random accesses operating at around 30 MHz. Microprocessors can request several hundred million memory accesses per second.

Since the processor is so much faster than the memory, it can only deliver its full performance potential with the help of a cache memory. A cache memory is a small, very fast memory that retains copies of recently used memory values. It operates transparently to the programmer, automatically deciding which values to keep and which to overwrite. These days it is usually implemented on the same chip as the processor.

Caches work because programs normally display the property of locality, which means that at any particular time they tend to execute the same instructions many times (for instance in a loop) on the same areas of data (for instance a stack).

Caches can be built in many ways. At the highest level a processor can have one of the following two organizations:

- A unified cache. This is a single cache for both instructions and data, as illustrated in Figure 10.1 on page 273.
- Separate instruction and data caches. This organization is sometimes called a modified Harvard architecture, as shown in Figure 10.2 on page 274.

Both these organizations have their merits.
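The behaviour described above can be sketched in a few lines of code. The following is a minimal, illustrative model of a direct-mapped cache (the class name, sizes, and the counters are all assumptions for the sake of the example, not taken from any particular processor); running a loop over a small array through it shows how locality turns almost all accesses after the first pass into hits.

```python
# Minimal sketch of a direct-mapped cache (sizes are illustrative).
class DirectMappedCache:
    def __init__(self, num_lines=64, line_size=4):
        self.num_lines = num_lines
        self.line_size = line_size          # bytes per cache line
        self.tags = [None] * num_lines      # one stored tag per cache line
        self.hits = 0
        self.misses = 0

    def access(self, address):
        line_addr = address // self.line_size   # which memory line holds this byte
        index = line_addr % self.num_lines      # cache line it maps to
        tag = line_addr // self.num_lines       # distinguishes lines sharing an index
        if self.tags[index] == tag:
            self.hits += 1                      # value already resident: fast path
        else:
            self.misses += 1                    # fetch from memory, evict old line
            self.tags[index] = tag

cache = DirectMappedCache()
# Temporal locality: the same 256-byte array is swept ten times.
# Spatial locality: each sweep touches sequential word addresses.
for _ in range(10):
    for addr in range(0, 256, 4):
        cache.access(addr)

hit_rate = cache.hits / (cache.hits + cache.misses)
print(f"hit rate: {hit_rate:.0%}")   # first pass misses, the rest hit
```

Only the first sweep misses (the data is being loaded); every later sweep finds its values already in the cache, which is exactly why a loop-dominated program runs close to cache speed rather than DRAM speed.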
The unified cache automatically adjusts the proportion of the cache memory used by instructions according to the current program requirements, giving a better performance th...