ARM SoC Architecture



The properties of the direct-mapped cache may be contrasted with those of more complex organizations:

- A particular memory item is stored in a unique location in the cache; two items with the same cache address field will contend for use of that location.
- Only those bits of the address that are not used to select within the line or to address the cache RAM need be stored in the tag field.
- The tag and data access can be performed at the same time, giving the fastest cache access time of any organization.
- Since the tag RAM is typically a lot smaller than the data RAM, its access time is shorter, allowing the tag comparison to be completed within the data access time.

[Figure 10.3: Direct-mapped cache organization]

A typical direct-mapped cache might store 8 Kbytes of data in 16-byte lines; there would therefore be 512 lines. A 32-bit address would have four bits to address bytes within the line and nine bits to select the line, leaving a 19-bit tag which requires just over one Kbyte of tag store.

When data is loaded into the cache, a block of data is fetched from memory. There is little point in having the line size smaller than the block size, and if the block size is smaller than the line size, the tag store must be extended to include a valid bit for each block within the line. Choosing the line and block sizes to be equal results in the simplest organization.

The set-associative cache

Moving up in complexity, the set-associative cache aims to reduce the problems due to contention by enabling a particular memory item to be stored in more than one cache location. A 2-way set-associative cache is illustrated in Figure 10.4 on page 276. As the figure suggests, this form of cache is effectively two direct-mapped caches operating in parallel. An address presented to the cache may find its data in either half, so each memory address may be stored in either of two places. Each of two items which were in contention for a single location in the direct-mapped cache may now occupy one of these places, allowing the cache to...
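To make the address split concrete, here is a minimal C sketch of the direct-mapped example above (8 Kbytes of data, 16-byte lines, 512 lines, 19-bit tag). The type and function names (cache_line_t, cache_read_byte and so on) are illustrative assumptions, not taken from the book, and the sequential code only models behaviour: in hardware the tag and data accesses proceed in parallel.

```c
#include <stdint.h>
#include <stdbool.h>

/* Parameters of the worked example: 8 Kbytes of data in 16-byte lines. */
#define LINE_SIZE    16                                /* bytes per line      */
#define NUM_LINES    512                               /* 8 Kbytes / 16 bytes */
#define OFFSET_BITS  4                                 /* log2(LINE_SIZE)     */
#define INDEX_BITS   9                                 /* log2(NUM_LINES)     */
#define TAG_BITS     (32 - OFFSET_BITS - INDEX_BITS)   /* 19-bit tag          */

/* One direct-mapped line: tag, valid bit and the stored data (hypothetical layout). */
typedef struct {
    uint32_t tag;                 /* upper 19 address bits      */
    bool     valid;               /* does the line hold data?   */
    uint8_t  data[LINE_SIZE];
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Split a 32-bit address into the three fields used by the cache. */
static uint32_t addr_offset(uint32_t addr) { return addr & (LINE_SIZE - 1); }
static uint32_t addr_index (uint32_t addr) { return (addr >> OFFSET_BITS) & (NUM_LINES - 1); }
static uint32_t addr_tag   (uint32_t addr) { return addr >> (OFFSET_BITS + INDEX_BITS); }

/* Look up one byte; returns true on a hit. The index selects a unique line,
 * so two addresses with the same index field contend for that line. */
bool cache_read_byte(uint32_t addr, uint8_t *out)
{
    cache_line_t *line = &cache[addr_index(addr)];
    if (line->valid && line->tag == addr_tag(addr)) {
        *out = line->data[addr_offset(addr)];
        return true;                     /* hit                                */
    }
    return false;                        /* miss: fetch the line from memory   */
}
```

On a miss, the controller would fetch the enclosing 16-byte block from memory, write it into the selected line, record the new tag and set the valid bit before the access is retried.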
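Along the same lines, the following hypothetical sketch extends the lookup to the 2-way set-associative organization described above, reusing the constants from the previous sketch. With the same 8 Kbyte capacity split into two halves there are 256 sets, so the index shrinks to eight bits and the tag grows to 20 bits; the loop over the two ways stands in for the parallel tag comparators of the real hardware.

```c
/* Reuses LINE_SIZE, OFFSET_BITS and NUM_LINES from the direct-mapped sketch. */
#define NUM_WAYS   2
#define NUM_SETS   (NUM_LINES / NUM_WAYS)    /* 256 sets                       */
#define SET_BITS   8                         /* log2(NUM_SETS)                 */

typedef struct {
    uint32_t tag;                            /* now 20 bits of the address     */
    bool     valid;
    uint8_t  data[LINE_SIZE];
} way_t;

static way_t sets[NUM_SETS][NUM_WAYS];

/* An address may hit in either half, so both ways of the set are checked. */
bool set_assoc_read_byte(uint32_t addr, uint8_t *out)
{
    uint32_t set = (addr >> OFFSET_BITS) & (NUM_SETS - 1);
    uint32_t tag = addr >> (OFFSET_BITS + SET_BITS);

    for (int way = 0; way < NUM_WAYS; way++) {
        way_t *line = &sets[set][way];
        if (line->valid && line->tag == tag) {
            *out = line->data[addr & (LINE_SIZE - 1)];
            return true;                     /* hit in one of the two places   */
        }
    }
    return false;                            /* miss: a victim way is chosen   */
}
```

Two items that would have fought over a single line in the direct-mapped cache can now each occupy one way of the same set, which is exactly the contention relief the text describes.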
