
Energy efficient cache architectures for single, multi and many core processors


Details

Title : Energy efficient cache architectures for single, multi and many core processors
Researcher : Thucanakkenpalayam Sundararajan, Karthik; Sundararajan, Karthik T.
Keywords : cache architecture, energy efficiency, Last Level Cache, LLC
Organization : Edinburgh Research Archive, United Kingdom
Contributor : Topham, Nigel
Year published : 2556 BE (2013)
Reference : http://hdl.handle.net/1842/9916
Source : -
Expertise : -
Related works :
    Karthik T. Sundararajan, Timothy M. Jones and Nigel P. Topham. A Reconfigurable Cache Architecture for Energy Efficiency. (Poster paper.) In Proceedings of the 8th ACM International Conference on Computing Frontiers (CF’11), Ischia, Italy, May 3-5, 2011.
    Karthik T. Sundararajan, Timothy M. Jones and Nigel P. Topham. Smart Cache: A Self Adaptive Cache Architecture for Energy Efficiency. In Proceedings of the 11th IEEE International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS’11), Samos, Greece, July 19-22, 2011.
    Karthik T. Sundararajan, Vasileios Porpodas, Timothy M. Jones, Nigel P. Topham and Bjorn Franke. Cooperative Partitioning: Energy-Efficient Cache Partitioning for High-Performance CMPs. In Proceedings of the 18th IEEE International Symposium on High Performance Computer Architecture (HPCA’12), New Orleans, Louisiana, February 25-29, 2012.
    Karthik T. Sundararajan, Timothy M. Jones and Nigel P. Topham. Energy-Efficient Cache Partitioning For Future CMPs. (Poster paper.) In Proceedings of the 21st ACM International Conference on Parallel Architectures and Compilation Techniques (PACT’12), Minneapolis, Minnesota, September 19-23, 2012.
    Karthik T. Sundararajan, Timothy M. Jones and Nigel P. Topham. The Smart Cache: An Energy-Efficient Cache Architecture Through Dynamic Cache Adaptation. In the International Journal of Parallel Programming (IJPP’13): Volume 41, Issue 2 (April 2013), Pages 305-330.
Content scope : -
Abstract/Description :

With each technology generation we get more transistors per chip. Whilst processor frequencies have increased over the past few decades, memory speeds have not kept pace. More and more transistors are therefore devoted to on-chip caches to reduce data latency and help achieve high performance. On-chip caches consume a significant fraction of the processor energy budget yet must still deliver high performance, so cache resources should be optimized to meet the requirements of the running applications. Fixed-configuration caches are designed to deliver low average memory access times across a wide range of potential applications. However, this can lead to excessive energy consumption for applications that do not require the full capacity or associativity of the cache at all times. Furthermore, in systems where the clock period is constrained by the access times of level-1 caches, the clock frequency for all applications is effectively limited by the cache requirements of the most demanding phase within the most demanding application. This motivates dynamic adaptation of cache configurations in order to optimize performance while minimizing energy consumption, on a per-application basis.

First, this thesis proposes an energy-efficient cache architecture for a single-core system, along with a run-time support framework for dynamic adaptation of cache size and associativity through the use of machine learning. The machine learning model, which is trained offline, profiles the application’s cache usage and then reconfigures the cache according to the program’s requirements. The proposed cache architecture has, on average, an 18% better energy-delay product than prior state-of-the-art cache architectures proposed in the literature.

Next, this thesis proposes cooperative partitioning, an energy-efficient cache partitioning scheme for multi-core systems that share the Last Level Cache (LLC), with a core to LLC cache way ratio of 1:4.
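The idea of choosing a per-phase cache configuration by energy-delay product (EDP, energy × delay) can be sketched as follows. This is a minimal illustration, not the thesis's machine-learning model: the cost constants, candidate configurations, and phase miss rates below are invented for the example, and an exhaustive search stands in for the offline-trained predictor.

```python
# Candidate (size_kb, associativity) configurations — hypothetical values.
CONFIGS = [(8, 1), (16, 2), (32, 4), (64, 8)]

def edp(config, phase):
    """Energy-delay product of one program phase under one cache config.

    phase: dict with an access count and a per-config miss-rate estimate.
    Per-access and per-miss costs here are toy numbers, not measurements.
    """
    size_kb, assoc = config
    miss_rate = phase["miss_rate"][config]
    accesses = phase["accesses"]
    # Larger, more associative caches cost more energy per access,
    # while each miss adds the (much larger) cost of a memory access.
    energy = accesses * ((0.1 * assoc + 0.01 * size_kb) + miss_rate * 10.0)
    # Fewer misses mean fewer long-latency stalls.
    delay = accesses * (1 + miss_rate * 100)   # toy cycle counts
    return energy * delay

def pick_config(phase):
    """Return the candidate configuration with minimal EDP for this phase."""
    return min(CONFIGS, key=lambda c: edp(c, phase))

# A phase whose working set only fits the larger configurations...
phase_cache_hungry = {
    "accesses": 1_000_000,
    "miss_rate": {(8, 1): 0.30, (16, 2): 0.15, (32, 4): 0.02, (64, 8): 0.01},
}
# ...and a streaming phase that misses equally everywhere,
# so extra capacity only wastes energy.
phase_streaming = {
    "accesses": 1_000_000,
    "miss_rate": {(8, 1): 0.05, (16, 2): 0.05, (32, 4): 0.05, (64, 8): 0.05},
}
```

Under these toy costs the cache-hungry phase selects the 32 KB 4-way configuration, while the streaming phase drops to the smallest one — the kind of per-phase decision the trained model makes at run time.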
The proposed cache partitioning scheme uses small auxiliary tags to capture each core’s cache requirements, and partitions the LLC according to each core’s individual demand. The partitioning uses a way-aligned scheme that helps reduce both dynamic and static energy. On average, this scheme offers 70% and 30% reductions in dynamic and static energy, respectively, while maintaining performance on par with state-of-the-art cache partitioning schemes.

Finally, when the number of LLC ways is equal to or less than the number of cores, as in many-core systems, cooperative partitioning cannot be used to partition the LLC. This thesis therefore proposes a region-aware cache partitioning scheme as an energy-efficient approach for many-core systems that share the LLC, with core to LLC way ratios of 1:2 and 1:1. On average, the proposed partitioning offers 68% and 33% reductions in dynamic and static energy, respectively, while again maintaining performance on par with state-of-the-art LLC cache management techniques.
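A way-aligned allocation of the kind described can be sketched as a proportional-share routine. This is a hypothetical illustration: the demand counts stand in for whatever the auxiliary-tag monitors would measure, and the periodic repartitioning and way-migration machinery of the actual scheme is omitted.

```python
def partition_ways(demand, total_ways):
    """Assign whole LLC ways to cores in proportion to measured demand.

    demand: per-core utility counts (stand-ins for auxiliary-tag estimates
    of how much each core benefits from extra capacity). Every core gets
    at least one way; because allocation is way-aligned, any way assigned
    to no active use could be power-gated as a whole unit.
    """
    n = len(demand)
    assert total_ways >= n, "need at least one way per core"
    total = sum(demand) or 1
    spare = total_ways - n                        # ways beyond the 1-way minimum
    exact = [d * spare / total for d in demand]   # ideal fractional shares
    ways = [1 + int(x) for x in exact]            # floor + guaranteed minimum
    # Hand out the remaining ways by largest fractional remainder.
    leftover = total_ways - sum(ways)
    order = sorted(range(n), key=lambda i: exact[i] - int(exact[i]), reverse=True)
    for i in order[:leftover]:
        ways[i] += 1
    return ways
```

With the thesis's 1:4 core-to-way ratio (4 cores, 16 ways), a core dominating the demand counters receives most of the ways while each lightly loaded core keeps its one-way minimum; equal demand degenerates to an even 4-way-per-core split.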

Bibliography :
Thucanakkenpalayam Sundararajan, Karthik; Sundararajan, Karthik T. (2556 BE [2013]). Energy efficient cache architectures for single, multi and many core processors.
    Bangkok : Edinburgh Research Archive, United Kingdom.