ARM System-on-Chip Architecture



…before the data itself is accessed. This overhead is usually avoided by implementing a translation look-aside buffer (TLB), which is a cache of recently used page translations. As with instruction and data caches (described in Section 10.3 on page 272), there are organizational options relating to the degree of associativity and the replacement strategy. The line and block sizes usually equate to a single page table entry, and a typical TLB is much smaller than a data cache, at around 64 entries. The locality properties of typical programs enable a TLB of this size to achieve a miss rate of a per cent or so; each miss incurs the table-walking overhead of two additional memory accesses. The operation of a TLB is illustrated in Figure 10.12 on page 288.

Figure 10.12  The operation of a translation look-aside buffer.

When a system incorporates both an MMU and a cache, the cache may operate either with virtual (pre-MMU) or physical (post-MMU) addresses. A virtual cache has the advantage that the cache access may start as soon as the processor produces an address, and, indeed, there is no need to activate the MMU at all if the data is found in the cache. The drawback is that the cache may contain synonyms: duplicate copies of the same main memory data item held in the cache under different addresses. Synonyms arise because address translation mechanisms generally allow overlapping translations. If the processor modifies the data item through one address route, it is not possible for the cache to update the second copy, leading to inconsistency within the cache.

A physical cache avoids the synonym problem, since physical memory addresses are associated with unique data items. However, the MMU must now be activated on every cache access, and with some MMU and cache organizations the address translation must be completed by the MMU before the cache access can begin, leading to much longer cache latencies. A physical cache arrangement that neatly avoids the sequential access cost exploit...
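To make the table-walk cost concrete, the following sketch in C models a small, fully associative TLB sitting in front of a hypothetical two-level page table: a hit returns the translation directly, while a miss performs the two additional memory accesses described above and fills the least recently used entry. The 64-entry size and the 4 KB page split (address bits 31..12 for the page number, 11..0 for the offset) follow the text and Figure 10.12; the table layout, the LRU policy, and all names are illustrative assumptions, not the ARM MMU design itself.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT  12              /* 4 KB pages: VA bits 31..12 hold the page number */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)
#define TLB_ENTRIES 64              /* typical TLB size quoted in the text */

typedef struct {
    int      valid;
    uint32_t vpn;                   /* virtual page number (the tag) */
    uint32_t pfn;                   /* physical frame number         */
    unsigned age;                   /* counter for LRU replacement   */
} tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];
static unsigned  tick, walks;

/* Hypothetical two-level page table: a first-level table of pointers to
 * second-level tables of 256 entries each (the 20-bit VPN split 12 + 8). */
static uint32_t *level1[1 << 12];

static uint32_t table_walk(uint32_t vpn)
{
    walks++;
    uint32_t *level2 = level1[vpn >> 8];        /* first additional memory access  */
    return level2 ? level2[vpn & 0xFF] : 0;     /* second additional memory access */
}

/* Translate a 32-bit virtual address; on a miss, walk the tables and
 * load the translation into the least recently used TLB entry.        */
uint32_t translate(uint32_t va)
{
    uint32_t vpn = va >> PAGE_SHIFT;
    int victim = 0;

    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {          /* hit: no table walk */
            tlb[i].age = ++tick;
            return (tlb[i].pfn << PAGE_SHIFT) | (va & OFFSET_MASK);
        }
        if (!tlb[i].valid || tlb[i].age < tlb[victim].age)
            victim = i;
    }

    uint32_t pfn = table_walk(vpn);             /* miss: two extra memory accesses */
    tlb[victim] = (tlb_entry){ 1, vpn, pfn, ++tick };
    return (pfn << PAGE_SHIFT) | (va & OFFSET_MASK);
}

int main(void)
{
    /* Map virtual pages 0..255 to arbitrary frames 1000..1255 for the demo. */
    level1[0] = malloc(256 * sizeof(uint32_t));
    for (uint32_t i = 0; i < 256; i++) level1[0][i] = 1000 + i;

    translate(0x00001234);          /* miss: walks the tables, fills the TLB */
    translate(0x00001ABC);          /* hit: same page, no table walk         */
    printf("table walks performed: %u\n", walks);   /* prints 1 */
    return 0;
}
```

The second access to the same page hits in the TLB, which is how locality keeps the miss rate down to a per cent or so despite the small number of entries.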
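The synonym problem can also be illustrated with a toy model. The sketch below, an assumed example rather than one from the book, builds a tiny virtually-indexed, virtually-tagged cache over a translation that deliberately maps two virtual addresses to the same physical word: after a write through one address route, the copy cached under the other route is left stale, which is exactly the inconsistency described above.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy machine: 16 physical words, a 4-line direct-mapped virtual cache,
 * and a translation that maps several virtual "pages" onto the same frame. */
enum { MEM_WORDS = 16, CACHE_LINES = 4 };

static uint32_t memory[MEM_WORDS];

typedef struct { int valid; uint32_t vaddr; uint32_t data; } line;
static line cache[CACHE_LINES];     /* indexed and tagged by virtual address */

/* Overlapping translation (hypothetical): virtual addresses 0x10 and 0x20
 * both name physical word 0, so they are synonyms for the same data item.  */
static uint32_t translate(uint32_t va) { return va & 0xF; }

/* The cache index is taken from the virtual page-number bits, so the two
 * synonyms land in different lines and can coexist in the cache.           */
static int index_of(uint32_t va) { return (va >> 4) % CACHE_LINES; }

static uint32_t vread(uint32_t va)
{
    line *l = &cache[index_of(va)];
    if (l->valid && l->vaddr == va) return l->data;   /* virtual-cache hit  */
    uint32_t d = memory[translate(va)];               /* miss: go to memory */
    *l = (line){ 1, va, d };
    return d;
}

static void vwrite(uint32_t va, uint32_t d)
{
    line *l = &cache[index_of(va)];
    *l = (line){ 1, va, d };        /* update the copy reached via this route   */
    memory[translate(va)] = d;      /* write through to memory; any synonym     */
}                                   /* line cached under another route is stale */

int main(void)
{
    vread(0x10);              /* caches physical word 0 under virtual tag 0x10 */
    vread(0x20);              /* caches the same word again under tag 0x20     */
    vwrite(0x10, 42);         /* modify it through the first address route     */

    uint32_t a = vread(0x10), b = vread(0x20);
    printf("via 0x10: %u, via 0x20: %u\n", a, b);     /* prints 42 and 0 */
    return 0;
}
```

A physically addressed cache would index and tag on the translated address, so both routes would find the same line and the duplicate could never arise; the price, as the text notes, is that the MMU sits on the cache access path.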
