EEL 5764 Project, Fall 2010

Abstract—The commonly used LRU replacement policy is susceptible to thrashing for memory-intensive workloads whose working set is greater than the available cache size. For such applications, the majority of lines traverse from the MRU position to the LRU position without receiving any cache hits, resulting in inefficient use of cache space. Cache performance can be improved if some fraction of the working set is retained in the cache, so that at least that fraction of the working set can contribute to cache hits. For such applications, Qureshi and Patt showed that simple changes to the insertion policy can significantly reduce cache misses. The authors proposed the LRU Insertion Policy (LIP), which places the incoming line in the LRU position instead of the MRU position. LIP protects the cache from thrashing and yields a close-to-optimal hit rate for applications that have a cyclic reference pattern. They also proposed the Bimodal Insertion Policy (BIP) as an enhancement of LIP that adapts to changes in the working set while retaining the thrashing protection of LIP. Finally, they proposed the Dynamic Insertion Policy (DIP), which chooses between BIP and the traditional LRU policy depending on which policy incurs fewer misses. The proposed insertion policies require no change to the existing cache structure, are trivial to implement, and have a storage requirement of less than two bytes.

Keywords—Insertion, Replacement, Thrashing, Set Dueling.

I. INTRODUCTION

One of the major limiters of computer system performance has been access to main memory, which is typically two orders of magnitude slower than the processor. To bridge this gap, modern processors already devote more than half of their on-chip transistors to the last-level cache.
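As a rough illustration of the policies summarized in the abstract, the sketch below compares LRU, LIP, and BIP insertion on a small fully associative cache. This is not the authors' simulator: the fully associative model, the cache size, the trace, and the BIP probability of 1/32 are simplifying assumptions chosen only to make the thrashing behavior visible.

```python
import random

def simulate(policy, cache_size, trace, bip_epsilon=1 / 32):
    """Count hits for a fully associative cache under a given insertion policy.

    The cache is a list ordered from LRU (index 0) to MRU (last index).
    All three policies promote a line to MRU on a hit and evict the LRU
    line on a miss; they differ only in where a missed line is inserted.
    """
    random.seed(0)  # make BIP's coin flips reproducible
    cache = []
    hits = 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.remove(addr)
            cache.append(addr)          # promote to MRU on hit
        else:
            if len(cache) >= cache_size:
                cache.pop(0)            # evict the LRU line
            if policy == "LRU":
                cache.append(addr)      # traditional: insert at MRU
            elif policy == "LIP":
                cache.insert(0, addr)   # LIP: insert at LRU
            elif policy == "BIP":
                if random.random() < bip_epsilon:
                    cache.append(addr)  # BIP: occasional MRU insertion
                else:
                    cache.insert(0, addr)
    return hits
```

For a cyclic trace of 8 distinct lines repeated through a 4-line cache, LRU thrashes and scores zero hits, while LIP keeps part of the working set resident and hits on that retained fraction each iteration, matching the behavior the abstract describes.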
Although direct-mapped caches suffer from higher miss ratios than set-associative caches, they are attractive for today's high-speed pipelined processors that require very low access times.

Thrashing is a situation in which large amounts of computer resources are used to do a minimal amount of work, with the system in a continual state of resource contention. Cache thrashing, where main memory is accessed in a pattern that leads to multiple main memory locations competing for the same cache lines, results in excessive cache misses and accesses to main memory. Serious system performance degradation occurs because the system spends a disproportionate amount of time just accessing data from memory. Thrashing is caused by programs or workloads that exhibit insufficient locality of reference: if the working set of a...

Manuscript submitted November 27, 2010. This work concerns the simulation and verification of the results presented in the paper "Adaptive Insertion Policies for High Performance Caching" by Qureshi et al., published in ISCA 2007. Pramod Busam and Sravani Konda are with the Department of Electrical Engineering, University of Florida, Gainesville, FL 32608, USA (e-mail: [email protected], [email protected]).