The system sits in this single-instruction loop until an interrupt causes it to do something else. Clearly the processor is doing no useful work while in this idle loop, so any power it uses is wasted. AMULET2 detects the opcode corresponding to a branch which loops to itself and uses this to stall a signal at one point in the asynchronous control network. This stall rapidly propagates throughout the control, bringing the processor to an inactive, zero-power state. An active interrupt request releases the stall, enabling the processor to resume normal throughput immediately. AMULET2 can therefore switch between the zero-power and maximum-throughput states at a very high rate and with no software overhead; indeed, much existing ARM code will give optimum power-efficiency under this scheme even though it was not written with the scheme in mind. This makes the processor well suited to low-power applications with bursty real-time load characteristics.

14.4 AMULET2e
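The branch-to-self detection described in Section 14.3 can be illustrated in software. The sketch below is a toy decoder, not the AMULET2 hardware mechanism; the function names are illustrative. It relies only on the standard ARM branch encoding: bits 27:25 are 101 for B/BL, the 24-bit immediate is a signed word offset, and the PC reads as the instruction address plus 8, so a branch to itself encodes an immediate of -2 (0xEAFFFFFE for an unconditional B).

```python
B_TO_SELF = 0xEAFFFFFE  # unconditional "B ." : cond = AL, imm24 = -2

def decode_branch_offset(instr):
    """Return the byte offset of an ARM B/BL instruction relative to its
    own address (PC reads as address + 8), or None if it is not a branch."""
    if (instr >> 25) & 0b111 != 0b101:   # bits 27:25 = 101 identify B/BL
        return None
    imm24 = instr & 0xFFFFFF
    if imm24 & 0x800000:                 # sign-extend the 24-bit field
        imm24 -= 1 << 24
    return imm24 * 4 + 8                 # +8 for the ARM pipeline offset

def is_branch_to_self(instr):
    """True when the instruction is a branch whose target is itself."""
    return decode_branch_offset(instr) == 0

assert is_branch_to_self(B_TO_SELF)            # the idle-loop opcode
assert not is_branch_to_self(0xEA000000)       # B with offset +8
assert not is_branch_to_self(0xE1A00000)       # MOV r0, r0 - not a branch
```

In AMULET2 this test is done in hardware on the fetched opcode, and a match stalls one handshake in the control network rather than executing anything.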
AMULET2e is an AMULET2 processor core (see Section 14.3 on page 381) combined with 4 Kbytes of memory, which can be configured either as a cache or as a fixed RAM area, and a flexible memory interface (the funnel) which allows 8-, 16- or 32-bit external devices to be connected directly, including memories built from DRAM. The internal organization of AMULET2e is illustrated in Figure 14.7.

Figure 14.7 AMULET2e internal organization.

AMULET2e cache

The cache comprises four 1 Kbyte blocks, each of which is a fully associative random-replacement store with a quad-word line and block size. A pipeline register between the CAM and the RAM sections allows a following access to begin its CAM look-up while the previous access completes within the RAM; this exploits the ability of the AMULET2 core to issue multiple memory requests before the data is returned from the first. Sequential accesses are detected and bypass the CAM look-up, thereby saving power and improving performance. Cache line fetches are non-blocking, accessing the addressed item first and then allowing the processor to continue while the rest of the line is fetched. The line fetch automat...
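The cache organization above can be sketched as a behavioural model. The following is a minimal illustration, not the AMULET2e implementation: it models one 1 Kbyte fully associative block (64 quad-word lines) with random replacement, and a "last line" record standing in for the hardware's sequential-access detection that bypasses the CAM look-up. The class and attribute names are assumptions for illustration.

```python
import random

LINE_BYTES = 16   # quad-word line: 4 x 32-bit words
LINES = 64        # 1 Kbyte block = 64 lines, fully associative

class CamRamBlock:
    """Toy model of one AMULET2e cache block: CAM tag store with random
    replacement, plus sequential-access bypass of the CAM look-up."""
    def __init__(self):
        self.tags = [None] * LINES   # CAM: one tag per line
        self.last_tag = None         # line hit by the previous access
        self.cam_lookups = 0         # count CAM look-ups actually performed

    def access(self, addr):
        tag = addr // LINE_BYTES
        # Sequential access to the same line: skip the CAM look-up entirely.
        if self.last_tag is not None and tag == self.last_tag:
            return 'hit (CAM bypassed)'
        self.cam_lookups += 1
        if tag in self.tags:                 # associative tag comparison
            self.last_tag = tag
            return 'hit'
        victim = random.randrange(LINES)     # random replacement policy
        self.tags[victim] = tag              # line fetch fills this entry
        self.last_tag = tag
        return 'miss (line fetched)'

cache = CamRamBlock()
results = [cache.access(a) for a in (0, 4, 8, 12)]  # four sequential words
# First access misses; the next three hit the same line with no CAM look-up.
assert cache.cam_lookups == 1
```

Accessing four consecutive words in one line costs a single CAM look-up in this model, which is the power and performance benefit the sequential-access detection is after. The real cache's pipeline register between CAM and RAM, and its non-blocking line fetch, are concurrency features that this sequential sketch does not attempt to capture.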