Authors
Matthew Lentz and Manoj Franklin, University of Maryland, USA
Abstract
Multicore processors have become ubiquitous, in both general-purpose and special-purpose applications. With the number of transistors on a chip continuing to increase, the number of cores per processor is also expected to grow. The cache replacement policy is an important design parameter of a cache hierarchy. As most processor designs have become multicore, there is a need to study cache replacement policies for multicore systems. Previous studies have focused on the shared levels of the multicore cache hierarchy. In this study, we focus on the top level of the hierarchy, which bears the brunt of the memory requests emanating from each processor core. We measure the miss rates of various cache replacement policies as the number of cores is increased from 1 to 16. The study was conducted by modifying the publicly available SESC simulator, which models in detail a multicore processor with a multi-level cache hierarchy. Our experimental results show that for the private L1 caches, the LRU (Least Recently Used) replacement policy outperforms all of the other replacement policies. This is in contrast to what was observed in previous studies for the shared L2 cache. The results presented in this paper are useful to hardware designers for optimizing cache designs, as well as for tuning program code.
Keywords
Multicore, Cache memory, Replacement policies, Performance evaluation