Figure 10.22: The Pentium memory system (32-bit address space, 4 KB page size).
- Instruction TLB: 4-way set associative, 32 entries, 8 sets
- Data TLB: 4-way set associative, 64 entries, 16 sets
- L1 i-cache and d-cache: 16 KB each, 128 sets, 32 B block size
- L2 cache: unified, 128 KB to 2 MB, 32 B block size
Aside: Optimizing address translation. In our discussion of address translation, we have described a sequential two-step process where the MMU (1) translates the virtual address to a physical address, and then (2) passes the physical address to the L1 cache. However, real hardware implementations use a neat trick that allows these steps to be partially overlapped, thus speeding up accesses to the L1 cache. For example, a virtual address on a Pentium with 4-KB pages has 12 bits of VPO, and these bits are identical to the 12 bits of PPO in the corresponding physical address. Since the four-way set-associative physical...
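The overlap works because, for this configuration, the L1 set index and block offset together occupy exactly the 12 page-offset bits that are identical in the virtual and physical addresses. The following sketch checks this arithmetic using the parameters from Figure 10.22; the addresses and the page number are hypothetical values chosen for illustration.

```python
# Sketch: why the Pentium L1 lookup can overlap with TLB translation.
# Parameters from Figure 10.22: 4 KB pages, 16 KB 4-way L1, 32 B blocks.
PAGE_SIZE  = 4096   # bytes -> 12-bit page offset (VPO == PPO)
BLOCK_SIZE = 32     # bytes -> 5-bit block offset
NUM_SETS   = 128    # -> 7-bit set index

OFFSET_BITS = BLOCK_SIZE.bit_length() - 1   # 5
INDEX_BITS  = NUM_SETS.bit_length() - 1     # 7
PAGE_BITS   = PAGE_SIZE.bit_length() - 1    # 12

# Set index + block offset fit entirely inside the page offset,
# so the cache can select a set before translation finishes.
assert OFFSET_BITS + INDEX_BITS == PAGE_BITS

def l1_set_index(addr):
    """Set index computed from the low 12 bits only -- no translation needed."""
    return (addr >> OFFSET_BITS) & (NUM_SETS - 1)

# A virtual address and its physical counterpart share the page offset,
# so they always select the same L1 set; the MMU can translate the VPN
# in parallel with the set lookup.
va = 0x0804A2E8                                      # hypothetical virtual address
pa = (0x1F3 << PAGE_BITS) | (va & (PAGE_SIZE - 1))   # hypothetical PPN 0x1F3
assert l1_set_index(va) == l1_set_index(pa)
```

The translated physical page number is still needed at the end of the lookup, to compare against the tags of the lines read out of the selected set.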
This note was uploaded on 09/02/2010 for the course ELECTRICAL 360 taught by Professor Schultz during the Spring '10 term at BYU.