ARM.SoC.Architecture
...zation of a cache are summarized in Table 10.1. The first of these is the relationship between the cache and the memory management unit (MMU), which will be discussed further in 'Virtual and physical caches' on page 287; the others have been covered in this section.

Table 10.1  Summary of cache organizational options.

Organizational feature    Options
Cache-MMU relationship    Physical cache; Virtual cache
Cache contents            Unified instruction and data cache; Separate instruction and data caches
Associativity             Direct-mapped (RAM-RAM); Set-associative (RAM-RAM); Fully associative (CAM-RAM)
Replacement strategy      Round-robin; Random; LRU
Write strategy            Write-through; Write-through with write buffer; Copy-back

10.4 Cache design - an example

The choice of organization for a cache requires the consideration of several factors, as discussed in Section 10.3 on page 272: the size of the cache, the degree of associativity, the line and block sizes, the replacement algorithm, and the write strategy. Detailed architectural simulations are required to analyse the effects of these choices on the performance of the cache.

The ARM3 cache

The ARM3, designed in 1989, was the first ARM chip to incorporate an on-chip cache, and detailed studies were carried out into the effects of these parameters on performance and bus use. These studies used specially designed hardware to capture address traces while running several benchmark programs on an ARM2; software was then used to analyse these traces and model the behaviour of the various cache organizations. (Today such special hardware is generally unnecessary, since desktop machines have sufficient performance to simulate large enough programs without hardware support.)

The study started by setting an upper bound on the performance benefit that could be expected from a cache. A 'perfect' cache, which always contains the requested data, was modelled to set this bound.
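The trace-driven analysis described above can be sketched in a few lines of modern code. The following is a minimal set-associative cache simulator that replays an address trace and reports the hit ratio; the trace format, parameter defaults, and choice of round-robin replacement are illustrative assumptions, not the original study's tooling.

```python
from collections import deque

def simulate(trace, size=4096, line_bytes=16, ways=2):
    """Return the hit ratio for a list of byte addresses.

    size       -- total cache size in bytes
    line_bytes -- bytes per cache line
    ways       -- associativity (1 = direct-mapped)

    Replacement within a set is round-robin (oldest entry evicted),
    one of the options listed in Table 10.1.
    """
    n_sets = size // (line_bytes * ways)
    # Each set holds up to `ways` tags; deque(maxlen=ways) silently
    # drops the oldest tag when a new one is appended, giving
    # round-robin replacement for free.
    sets = [deque(maxlen=ways) for _ in range(n_sets)]
    hits = 0
    for addr in trace:
        line = addr // line_bytes   # strip the byte-within-line offset
        index = line % n_sets       # which set this line maps to
        tag = line // n_sets        # remaining bits identify the line
        if tag in sets[index]:
            hits += 1
        else:
            sets[index].append(tag)  # miss: fetch line, evict oldest
    return hits / len(trace)

# A tight loop re-reading a small array hits almost every time:
trace = [a for _ in range(10) for a in range(0, 64, 4)]
print(simulate(trace))  # 0.975 (only the 4 compulsory misses)
```

Varying `size`, `ways`, and `line_bytes` over the same trace is exactly the kind of design-space sweep the ARM3 study performed, just driven from captured hardware traces rather than synthetic ones.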
Any real cache is bound to miss some of the time, so it cannot perform any better than one which always hits. Three forms of perfect cache were modelled using realistic as...
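The upper bound set by a perfect cache can be made concrete with a simple cycle-count model; the cycle figures below are illustrative placeholders, not the study's measured numbers.

```python
def cycles(accesses, hit_ratio, hit_cycles=1, miss_penalty=10):
    """Total memory-access cycles for a cache with the given hit ratio.

    hit_cycles and miss_penalty are illustrative values, not ARM3
    figures.  A miss costs the cache lookup plus the penalty of
    going out to main memory.
    """
    hits = accesses * hit_ratio
    misses = accesses - hits
    return hits * hit_cycles + misses * (hit_cycles + miss_penalty)

# A perfect cache (hit_ratio = 1.0) bounds achievable performance;
# no real cache, which must sometimes miss, can do better.
perfect = cycles(1_000_000, 1.0)   # 1,000,000 cycles
real = cycles(1_000_000, 0.95)     # 1,500,000 cycles: 50% slower
```

Even a 95%-hit cache spends half again as many cycles on memory as the perfect bound under these assumed penalties, which is why the study anchored its comparisons to the perfect-cache model first.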