An Application Classification Guided Cache
Tuning Heuristic for Multi-core Architectures
PRESENTED BY:
DEBABRATA PAUL CHOWDHURY (14014081002)
KHYATI RAJPUT (14014081007)
M.TECH-CE (SEM-II)
GUIDED BY:
PROF. PRASHANT MODI (UVPCE)
Contents
• Introduction
• Multi-core System Optimization
• Cache Tuning
• Cache Tuning Process
• Multi-core Architectural Layout
• Application Classification Guided Cache Tuning Heuristic
• Experimental Work
• Conclusion
Introduction
Basic Concepts
• Single Core:- In a single-core architecture, the computing component has one
independent processing unit.
Introduction(cont.)
• Multi Core:-
• A multi-core architecture is a single computing component with two or more independent
processing units (called "cores").
• Cores run multiple instructions of a program at the same time, increasing overall speed for
programs ("parallel computing").
Multi-Core System Optimization
•Previous multi-core cache optimizations focused only on improving performance (such
as the number of hits, misses, and write backs).
•Current multi-core optimizations focus on reducing energy consumption by tuning
individual cores.
•Definition of multi-core system optimization:
• Multi-core system optimization improves system performance and energy
consumption by tuning the system to the application's runtime behavior and
resource requirements.
What is Cache Tuning?
•Cache tuning is the task of choosing the best configuration of cache design parameters
for a particular application, or for a particular phase of an application, such that
performance, power, and/or energy are optimized.
Cache Tuning Process
Step 1:- Execute the application for one tuning interval in each potential
configuration (tuning intervals must be long enough for the cache behavior to
stabilize).
Step 2:- Gather cache statistics, such as the number of accesses, misses, and
write backs, for each explored configuration.
Step 3:- Combine the cache statistics with an energy model to determine the
optimal cache configuration.
Step 4:- Fix the cache parameter values to the optimal cache configuration’s
parameter values.
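The four steps above can be sketched as a simple exhaustive tuning loop. This is a hypothetical sketch: `run_interval` and `energy_model` are illustrative stand-ins for the simulator run and the energy model, not names from the source.

```python
def tune_cache(configurations, run_interval, energy_model):
    """Exhaustive cache tuning: run each configuration for one tuning
    interval, score it with the energy model, and fix the best one."""
    energies = {}
    for config in configurations:
        # Steps 1-2: execute one tuning interval and gather cache statistics
        stats = run_interval(config)
        # Step 3: combine the statistics with the energy model
        energies[config] = energy_model(stats)
    # Step 4: fix the cache to the lowest-energy configuration
    return min(energies, key=energies.get)
```

In practice the loop is the expensive part: each candidate configuration costs one full tuning interval of execution, which is why later sections work to reduce the number of cores tuned at all.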
Multi-core Architectural Layout
Multi-core Architectural Layout(cont.)
• The multi-core architecture consists of:-
1. An arbitrary number of cores
2. A cache tuner
• Each core has a private data cache (L1).
• A global cache tuner is connected to each core's private data cache (L1).
• The tuner executes the cache tuning heuristic: it gathers cache statistics, coordinates
cache tuning among the cores, and calculates each cache's energy consumption.
Multi-core Architectural Layout(cont.)
Overheads in this Multi-core Architecture Layout
• During tuning, applications incur stall cycles while the tuner gathers cache statistics,
calculates energy consumption, and changes the cache configuration.
• These tuning stall cycles introduce Energy and Performance overhead.
• Our tuning heuristic considers these overheads incurred during the tuning stall cycles,
and thus minimizes the number of simultaneously tuned cores and the tuning energy
and performance overheads.
Multi-core Architectural Layout(cont.)
Multi-core Architectural Layout(cont.)
• The figure illustrates the similarities using actual data cache miss rates for an 8-core system
(the cores are denoted P0 to P7).
•We evaluate cache miss rate similarity by normalizing the caches' miss rates to the core
with the lowest miss rate.
•In the first figure, the normalized miss rates are nearly 1.0 for all cores, so all caches are
classified as having similar behavior.
•In the second figure, the normalized miss rates show that P1 has similar cache behavior to P2
to P7 (i.e., P1 to P7's normalized miss rates are nearly 3.5), but P0 has different cache
behavior than P1 to P7.
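The normalization and grouping described above can be sketched as follows. This is a minimal sketch: the similarity `tolerance` is an assumed parameter, not a value given in the source.

```python
def normalized_miss_rates(miss_rates):
    """Normalize each core's miss rate to the core with the lowest rate."""
    lowest = min(miss_rates.values())
    return {core: rate / lowest for core, rate in miss_rates.items()}

def similar_groups(miss_rates, tolerance=0.25):
    """Group cores whose normalized miss rates fall within `tolerance`
    of each other (tolerance is an assumed threshold)."""
    norm = normalized_miss_rates(miss_rates)
    groups = []
    for core in sorted(norm, key=norm.get):
        for group in groups:
            # compare against the first core placed in the group
            if abs(norm[core] - norm[group[0]]) <= tolerance:
                group.append(core)
                break
        else:
            groups.append([core])
    return groups
```

With miss rates like the second figure's (P0 low, P1 to P7 roughly 3.5x higher), P0 ends up alone in its own group while P1 to P7 share one group.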
Application Classification Guided Cache
Tuning Heuristic
• Application classification is based on two things:-
1. Cache behaviour
2. Data sharing vs. non data sharing
• Cache accesses and misses are used to determine whether the cores' data sets have similar
cache behavior.
•If coherence misses account for more than 5% of the total cache misses, the application is
classified as data-sharing; otherwise, the application is non-data-sharing.
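The 5% coherence-miss rule can be expressed directly. A minimal sketch of the classification rule above; the function name is illustrative.

```python
def classify_application(coherence_misses, total_misses, threshold=0.05):
    """Classify an application as data-sharing if coherence misses
    account for more than 5% of the total cache misses."""
    if total_misses == 0:
        return "non-data-sharing"
    if coherence_misses / total_misses > threshold:
        return "data-sharing"
    return "non-data-sharing"
```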
Application Classification Guided Cache
Tuning Heuristic(cont.)
Application Classification Guided Cache
Tuning Heuristic(cont.)
• The application classification guided cache tuning heuristic consists of three
main steps:
1) Application profiling and initial tuning
2) Application classification
3) Final tuning actions
Application Classification Guided Cache
Tuning Heuristic(cont.)
•Step 1 profiles the application to gather the cache statistics, which are used to determine
cache behavior and data sharing in Step 2.
•Step 1 is critical for avoiding redundant cache tuning in situations where the data sets have
similar cache behavior and similar optimal configurations.
•Condition 1 and Condition 2 classify the applications based on whether or not the cores have
similar cache behavior and/or exhibit data sharing, respectively.
•Evaluating these conditions determines the necessary cache tuning effort in Step 3.
•If Condition 1 evaluates to true, only a single cache needs to be tuned.
•Once the final configuration is obtained, it is applied to all other cores.
Application Classification Guided Cache
Tuning Heuristic(cont.)
•If the data sets have different cache behavior (Condition 1 is false), tuning is
more complex and several cores must be tuned.
•If the application does not share data (Condition 2 is false), the heuristic only
tunes one core from each group, and cores can be tuned independently without
affecting the behavior of the other cores.
•If the application shares data (Condition 2 is true), the heuristic still only tunes
one core from each group, but the tuning must be coordinated among the cores.
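The three cases above reduce to a small decision function. A sketch only; the returned labels are illustrative descriptions, not identifiers from the source.

```python
def tuning_strategy(similar_behavior: bool, shares_data: bool) -> str:
    """Map the two classification conditions to the tuning effort in Step 3."""
    if similar_behavior:
        # Condition 1 true: tune a single cache, then copy its final
        # configuration to every other core
        return "tune one cache, apply configuration to all cores"
    if not shares_data:
        # Condition 2 false: tune one core per group, independently
        return "tune one core per group, independently"
    # Condition 2 true: still one core per group, but coordinated
    return "tune one core per group, coordinated among cores"
```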
Experimental Results
• We quantified the energy savings and performance of our heuristic using the SPLASH-2
multithreaded applications.
• The SPLASH-2 suite is one of the most widely used collections of multithreaded
workloads.
• We ran the applications on the SESC simulator for 1-, 2-, 4-, 8-, and 16-core systems. In
SESC, we modeled a heterogeneous system with configurable L1 data cache parameters.
•Since the L1 data cache has 36 possible configurations, our design space is 36^n, where
n is the number of cores in the system.
•The L1 instruction cache was fixed at the base configuration, and the L2 unified cache was
fixed as a 256 KB, 4-way set associative cache with a 64 byte line size. We modified
SESC to identify coherence misses.
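The exponential growth of the design space is easy to check. The 36 configurations per cache is from the source; the function name is illustrative.

```python
def design_space_size(n_cores, configs_per_cache=36):
    """Total configurations when n L1 data caches are tuned independently."""
    return configs_per_cache ** n_cores
```

Even at 4 cores the space reaches 36^4 = 1,679,616 configurations, which is why exhaustive exploration was only feasible for the 2- and 4-core systems.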
Experimental Results(cont.)
Energy Model for the multi-core system
• total energy = ∑(energy consumed by each core)
• energy consumed by each core:
energy = dynamic_energy + static_energy + fill_energy + writeback_energy + CPU_stall_energy
• dynamic_energy: energy consumed by switching activity on each cache access.
dynamic_energy = dL1_accesses * dL1_access_energy
• static_energy: leakage energy consumed over all cycles the cache is active,
independent of switching activity.
static_energy = ((dL1_misses * miss_latency_cycles) + (dL1_hits * hit_latency_cycles) +
(dL1_writebacks * writeback_latency_cycles)) * dL1_static_energy
Experimental Results(cont.)
•fill_energy: energy to fill a cache line from memory on a miss.
fill_energy = dL1_misses * (linesize / wordsize) * mem_read_energy_perword
• writeback_energy: energy to write a dirty cache line back to memory.
writeback_energy = dL1_writebacks * (linesize / wordsize) *
mem_write_energy_perword
•CPU_stall_energy: energy consumed while the CPU is stalled on misses and write backs.
CPU_stall_energy = ((dL1_misses * miss_latency_cycles) +
(dL1_writebacks * writeback_latency_cycles)) * CPU_idle_energy
• Our model calculates the dynamic and static energy of each data cache, the energy needed to
fill the cache on a miss, the energy consumed on a cache write back, and the energy consumed
when the processor is stalled during cache fills and write backs.
• We gathered dL1_misses, dL1_hits, and dL1_writebacks cache statistics using SESC.
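The five components can be combined into one per-core function mirroring the formulas above. A sketch: the `stats` and `params` dictionaries are illustrative containers for the SESC statistics and the per-event energy constants.

```python
def cache_energy(stats, params):
    """Per-core energy: dynamic + static + fill + writeback + CPU stall."""
    dynamic = stats["dL1_accesses"] * params["dL1_access_energy"]
    # cycles spent servicing hits, misses, and write backs
    active_cycles = (stats["dL1_misses"] * params["miss_latency_cycles"]
                     + stats["dL1_hits"] * params["hit_latency_cycles"]
                     + stats["dL1_writebacks"] * params["writeback_latency_cycles"])
    static = active_cycles * params["dL1_static_energy"]
    words_per_line = params["linesize"] // params["wordsize"]
    fill = stats["dL1_misses"] * words_per_line * params["mem_read_energy_perword"]
    writeback = (stats["dL1_writebacks"] * words_per_line
                 * params["mem_write_energy_perword"])
    # cycles the CPU is stalled during fills and write backs
    stall_cycles = (stats["dL1_misses"] * params["miss_latency_cycles"]
                    + stats["dL1_writebacks"] * params["writeback_latency_cycles"])
    cpu_stall = stall_cycles * params["CPU_idle_energy"]
    return dynamic + static + fill + writeback + cpu_stall

def total_energy(per_core_stats, params):
    """Total energy = sum of the energy consumed by each core."""
    return sum(cache_energy(stats, params) for stats in per_core_stats)
```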
Experimental Results(cont.)
• We assumed the core's idle energy (CPU_idle_energy) and the static energy
per cycle to each be 25% of the cache's dynamic energy.
• We assumed a tuning interval of 50,000 cycles.
• We used configuration_energy_per_cycle to determine the energy consumed during each
500,000-cycle tuning interval and the energy consumed in the final configuration.
• Energy savings were calculated by normalizing the energy to the energy consumed
executing the application in the base configuration.
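Normalizing to the base configuration gives the savings figure directly (a minimal sketch; the function name is illustrative):

```python
def energy_savings(config_energy, base_energy):
    """Fractional savings relative to the base configuration;
    e.g. 0.25 means 25% less energy than the base."""
    return 1.0 - config_energy / base_energy
```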
Results and Analysis
•The figures below depict the energy savings and performance, respectively, for the
optimal configuration determined via exhaustive design space exploration (optimal) for 2-
and 4-core systems, and for the final configuration found by our application classification
cache tuning heuristic (heuristic) for 2-, 4-, 8-, and 16-core systems, for each application
and averaged across all applications (Avg).
•Our heuristic achieved 26% and 25% energy savings and incurred 9% and 6% performance
penalties for the 8- and 16-core systems, respectively, while achieving average speedups.
Results and Analysis(Cont..)
• Normalised performance for the optimal cache configuration (optimal) for 2- and 4-core
systems and for the final configuration of the application classification cache tuning
heuristic for 2-, 4-, 8-, and 16-core systems, compared to each system's respective base
configuration.
Results and Analysis(Cont..)
• Energy Saving
• The figure shows the energy savings achieved relative to the base configuration.
Conclusion
•Our heuristic classified applications based on data sharing and cache behavior, and used
this classification to identify which cores needed to be tuned and to reduce the number
of cores being tuned simultaneously.
Future Work
•Our heuristic searched at most 1% of the design space, yielded configurations within 2%
of the optimal, and achieved an average of 25% energy savings.
•In future work, we plan to investigate how our heuristic applies to larger systems
with hundreds of cores.