Optimizing Code in Compilers Using a Parallel Genetic Algorithm
Introduction · Background · Methodology · Experimental results
Optimizing Code using Parallel Genetic Algorithm (93/3/5)
An overview
• Introduction
• Background
• Methodology
• Experimental results
• Conclusion
Compiler optimization
• Compiler optimization is the technique of minimizing or maximizing some features of executable code by tuning the output of a compiler.
• Modern compilers support many different optimization phases; these phases should analyze the code and produce semantically equivalent, performance-enhanced code.
• The vital parameters defining the performance enhancement include:
  • Execution time
  • Size of code
The phase ordering
• The compiler optimization phase ordering poses challenges not only to the compiler developer but also to the multithreaded programmer trying to enhance the performance of multicore systems.
• Many compilers apply their numerous optimization techniques in a predetermined order.
• This predetermined ordering of optimization techniques may not always yield optimal code.
(Figure: the search space of optimization phase sequences applied to the code.)
Optimization flags
• The appropriate optimization phase ordering varies with the application being compiled, the architecture of the machine on which it runs, and the compiler implementation.
• Many compilers allow optimization flags to be set by the user.
• Turning on optimization flags makes the compiler attempt to improve performance and code size at the expense of compilation time.
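Setting such flags per program is exactly what a search-based optimizer explores. As a sketch, a chromosome of on/off decisions maps naturally to a compiler command line; the flag names below are real GCC flags chosen for illustration, not necessarily the set used in this work:

```python
# Map a bit vector of optimization decisions to a GCC command line.
# Illustrative flag set; the paper's exact flags are not shown here.
FLAGS = ["-funroll-loops", "-finline-functions", "-ftree-vectorize",
         "-fomit-frame-pointer", "-fgcse", "-fpeel-loops"]

def build_command(bits, source="program.c", out="program"):
    # Keep only the flags whose bit is set in the chromosome.
    enabled = [flag for flag, bit in zip(FLAGS, bits) if bit]
    return ["gcc", source, "-o", out] + enabled

cmd = build_command([1, 0, 1, 0, 0, 1])
# cmd == ["gcc", "program.c", "-o", "program",
#         "-funroll-loops", "-ftree-vectorize", "-fpeel-loops"]
```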
GNU compiler collection
• The GNU Compiler Collection (GCC) includes front ends for C, C++, Objective-C, Fortran, Java, Ada, and Go, as well as libraries for these languages.
• To control compilation time, compiler memory usage, and the trade-offs between speed and space for the resulting executable, GCC provides a range of general optimization levels, numbered 0–3, as well as individual options for specific types of optimization.
Optimization levels
• The impact of the different optimization levels on the input code is described below:
• -O0 (the default when no -O option is given): no optimization, which makes bugs easy to track down.
• -O1 (or -O): performs a lot of simple optimizations and eliminates redundancy. Result: less compile time, plus smaller and faster executable code.
• -O2: everything in -O1 plus additional optimizations such as instruction scheduling. Only optimizations that do not require a speed-space tradeoff are used, so the executable should not increase in size: maximum optimization without growing the executable. The costs are more compile time and more memory usage; -O2 is usually the best choice for deploying a program.
• -O3: everything in -O1 and -O2 plus more expensive optimizations such as function inlining. It produces faster executable code and maximum loop optimization, at the price of bulkier code.
The challenge
• Which optimization level should be used for a sequential quicksort, and which for a parallel quicksort? The parallel version adds the overhead of inter-process communication, so the best level is not obvious.
Genetic algorithm
Initial population → Selection → Intermediate population (mating pool) → Crossover & mutation → Replacement → Next population
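The cycle above can be sketched in Python. This is a generic sketch with roulette-wheel selection and elitist replacement, not the paper's exact operator configuration:

```python
import random

def next_generation(population, fitness, crossover, mutate, elite=1):
    """One GA generation: selection into a mating pool, crossover and
    mutation, then replacement to form the next population."""
    scores = [fitness(ind) for ind in population]
    # Elitism: carry the best individuals over unchanged.
    ranked = sorted(range(len(population)), key=lambda i: scores[i], reverse=True)
    new_pop = [population[i] for i in ranked[:elite]]
    while len(new_pop) < len(population):
        # Roulette-wheel (fitness-proportional) selection of two parents.
        p1, p2 = random.choices(population, weights=scores, k=2)
        c1, c2 = crossover(p1, p2)
        new_pop.extend([mutate(c1), mutate(c2)])
    return new_pop[:len(population)]
```

Calling `next_generation` repeatedly for a fixed number of generations evolves the population.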
PGA for Compiler Optimization
• The work in this research uses the GCC 4.8 compiler on Ubuntu 12.04 with the OpenMP 3.0 library.
The master-slave model
• In the master-slave model the master runs the evolutionary algorithm, controls the slaves, and distributes the work.
• The slaves take batches of individuals from the master, evaluate them, and finally send the calculated fitness values back to the master.
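The division of labor can be sketched with a thread pool standing in for the slave cores. Here `evaluate` is a stand-in cost function, since in the real system each slave would compile and time a benchmark:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(individual):
    # Slave-side work: the real system compiles and times the benchmark
    # with the individual's flags; a stand-in cost function is used here.
    return sum(individual)

def master_evaluate(population, slaves=4):
    # The master distributes batches of individuals to the slaves and
    # collects the computed fitness values in order.
    with ThreadPoolExecutor(max_workers=slaves) as pool:
        return list(pool.map(evaluate, population))

fitnesses = master_evaluate([[1, 0, 1], [0, 0, 0], [1, 1, 1]])
# fitnesses == [2, 0, 3]
```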
Encoding
• Each chromosome is encoded as a bit string, e.g. 1101011101, in which each bit records whether one optimization flag is turned on or off.
Fitness function
• In the proposed system the PGA works with a population of six chromosomes on an eight-core machine, and the fitness function is computed at the master core.

Fitness = |exe_with_flag_i − exe_without_flag_i|,  i ∈ {1, 2, …, 12}

(Figure: the master node generates the random initial population and evaluates all individuals; the slave nodes run the algorithm; the process terminates after 200 generations.)
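A sketch of the fitness measurement; `execution_time` is a hypothetical timing helper, and the paper's exact per-flag accounting may differ:

```python
import subprocess
import time

def execution_time(binary):
    # Time one run of a compiled benchmark (a real harness would average
    # several runs to reduce noise).
    start = time.perf_counter()
    subprocess.run([binary], check=True)
    return time.perf_counter() - start

def fitness(exe_with_flag, exe_without_flag):
    # The slide's fitness: |exe_with_flag_i - exe_without_flag_i|.
    return abs(exe_with_flag - exe_without_flag)
```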
Algorithm for Slave Nodes
Step 1: Receive all the chromosomes, with their fitness values, from the master node.
Step 2: The slave cores apply the roulette-wheel, stochastic universal sampling, and elitism selection methods in parallel.
Step 3: Create the next generation by applying two-point crossover.
Step 4: Apply swap mutation (a two-position interchange) to produce two new offspring chromosomes.
Step 5: Send both chromosomes back to the master node, which collects chromosomes from all slaves.
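The stochastic universal sampling selection used by the slave cores can be sketched as follows (a generic sketch, not the paper's code):

```python
import random

def sus_select(population, scores, n):
    # Stochastic universal sampling: place n evenly spaced pointers on the
    # cumulative-fitness wheel and pick the individual under each pointer.
    step = sum(scores) / n
    start = random.uniform(0, step)          # single random offset
    chosen, cum, idx = [], scores[0], 0
    for k in range(n):
        pointer = start + k * step
        while cum < pointer:                 # advance to the slot under the pointer
            idx += 1
            cum += scores[idx]
        chosen.append(population[idx])
    return chosen
```

Unlike repeated roulette-wheel spins, the evenly spaced pointers guarantee that selection counts stay close to each individual's expected value.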
Crossover and mutation
• Two-point crossover
• Swap mutation
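Both operators can be sketched on list-encoded chromosomes; cut points and swap positions are chosen at random:

```python
import random

def two_point_crossover(p1, p2):
    # Pick two cut points and exchange the middle segment between parents.
    i, j = sorted(random.sample(range(1, len(p1)), 2))
    return p1[:i] + p2[i:j] + p1[j:], p2[:i] + p1[i:j] + p2[j:]

def swap_mutation(chrom):
    # Interchange the genes at two random positions (two-position interchange).
    c = list(chrom)
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c
```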
Benchmarks
• All the benchmark programs are parallelized using the OpenMP library to reap the benefits of the PGA.
Performance analysis
• As the figures show, the results after applying the PGA (WGAO) present a major improvement over random optimization (WRO) and over compiling the code without any optimization (WOO).
Conclusion
• In compiler optimization research, phase ordering is an important performance-enhancement problem.
• This study indicates that, when the PGA is used, the performance of the benchmark programs increases as the number of cores increases.
• The major concern in the experiment is the master core's waiting time while collecting values from the slaves, which is primarily due to the synchronized communication between the master and slave cores.
• It may further be noted that, apart from Prim's algorithm on the 8-core system, all of the benchmarks exhibit better average performance.
Thanks for your attention.
References
[1] Satish Kumar T., Sakthivel S., and Sushil Kumar S., "Optimizing Code by Selecting Compiler Flags using Parallel Genetic Algorithm on Multicore CPUs," International Journal of Engineering and Technology, Vol. 32, No. 5, 2014.
[2] Prathibha B., Sarojadevi H., and Harsha P., "Compiler Optimization: A Genetic Algorithm Approach," International Journal of Computer Applications, Vol. 112, No. 10, 2015.
