The ARM1020E CPU is based around the ARM10TDMI core described in Section 9.4 on page 263. It implements ARM architecture version v5TE, which includes the Thumb instruction set and the signal-processing instruction set extensions described in Section 8.9 on page 239. The organization of the ARM1020E is very similar to that of the ARM920T shown in Figure 12.11 on page 336, the differences being the size of the caches and the bus widths. The only difference of substance (at the level of detail shown in that figure) is the use of two AMBA AHB bus-master request-grant handshakes, one each for the instruction and data sides, whereas the ARM920T arbitrates these internally to form a single external request-grant interface.

ARM1020E caches

The ARM1020E incorporates a 32 Kbyte instruction cache and a 32 Kbyte data cache. Both caches are 64-way associative, use a segmented CAM-RAM structure and have a 32-byte line size. Both caches support lock-down and use either a pseudo-random or a round-robin replacement algorithm. To satisfy the ARM10TDMI's bandwidth requirements, both caches have a 64-bit data bus: the instruction cache can supply two instructions per cycle, and the data cache can supply two words of data during each cycle of a load multiple or accept two words of data during each cycle of a store multiple.

The instruction cache is read-only. The data cache employs a copy-back write strategy and has one valid, one dirty and one write-back bit per line. When a line is replaced in the cache, either zero or eight words are written back to main memory, depending on the state of the dirty bit. The full eight-word cast-out can be completed in four memory cycles thanks to the 64-bit memory data bus. The write-back bit duplicates information usually held in the translation system, and enables the cache to treat a write as write-through or copy-back without reference to the MMU. The ARM1020E also supports 'hit-under-miss' operation.
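The figures quoted above (32 Kbyte capacity, 64 ways, 32-byte lines, a 64-bit bus) fix a few derived quantities, such as the number of CAM-RAM segments and the four-cycle cast-out. A quick arithmetic sketch checks them; the variable names are mine, purely for illustration:

```python
# Derived geometry for the ARM1020E caches as described in the text.
# Constants come from the text; the derivations are simple arithmetic.

CACHE_BYTES = 32 * 1024   # 32 Kbyte instruction or data cache
LINE_BYTES = 32           # 32-byte cache line
WAYS = 64                 # 64-way associative
BUS_BYTES = 8             # 64-bit data bus moves 8 bytes per cycle
WORD_BYTES = 4            # 32-bit ARM word

lines = CACHE_BYTES // LINE_BYTES          # total lines in the cache
segments = lines // WAYS                   # CAM-RAM segments (sets)
words_per_line = LINE_BYTES // WORD_BYTES  # words cast out on a dirty replace
castout_cycles = LINE_BYTES // BUS_BYTES   # cycles to write one line back

print(lines, segments, words_per_line, castout_cycles)
```

This reproduces the text's numbers: 1,024 lines arranged as 16 segments of 64 ways, eight words per line, written back in four cycles over the 64-bit bus.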
This means that if one data reference causes a cache miss, the cache can continue to service subsequent references that hit while the missed data is fetched from memory.
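The benefit of hit-under-miss can be seen in a toy timing model. This is my own minimal sketch, not ARM's pipeline: the latencies are arbitrary, and only one miss may be outstanding at a time.

```python
# Toy model (illustrative only) comparing a blocking cache with
# hit-under-miss. A hit costs 1 cycle, a miss 10 cycles; at most one
# miss can be outstanding at once.

def total_cycles(accesses, cached, overlap_hits):
    """Total cycles to service 'accesses' given the set of cached
    addresses. With overlap_hits=True, hits proceed while a miss is
    still outstanding (hit-under-miss); otherwise they wait for it."""
    MISS, HIT = 10, 1
    cycles = 0
    pending = 0  # cycles remaining on the one outstanding miss
    for addr in accesses:
        if addr in cached:
            cycles += HIT
            if overlap_hits:
                pending = max(0, pending - HIT)  # hit runs under the miss
            else:
                cycles += pending                # blocking: drain the miss
                pending = 0
        else:
            cycles += pending                    # wait: one miss at a time
            pending = MISS
    return cycles + pending                      # drain any final miss

# One miss followed by three hits: blocking pays the full miss latency
# up front; hit-under-miss hides the hits behind the miss.
blocking = total_cycles([100, 1, 2, 3], {1, 2, 3}, overlap_hits=False)
hum = total_cycles([100, 1, 2, 3], {1, 2, 3}, overlap_hits=True)
print(blocking, hum)
```

In this sketch the blocking cache takes 13 cycles for the sequence, while hit-under-miss takes 10, because the three hits are serviced entirely under the outstanding miss.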