MPI - MESSAGE PASSING INTERFACE
An approach for parallel algorithms
COMPUTER SCIENCE DEPARTMENT
UNDER THE GUIDANCE OF
PROF. AKHTAR RASOOL
TABLE OF CONTENT
Parallel Merge Sort using MPI
Introduction
Background
MPI
Code implementation
Result
About Us
Goal and Vision
Target
Programming language
Cluster
MPI library
Dataset
What is MPI
Data type and syntax
Communication Mode
Features
Merge sort implementation
Flow of code
Process flow
Single processor
Multiprocessor
Summary
About Us
Our Team
Nishaant Sharma
131112226
Kartik
131112265
Mohit Raghuvanshi
131112232
Prabhash Prakash
131112241
“Analysis of running time complexity of a sorting
algorithm (merge sort) on a parallel processing
environment (preferably a cluster with several
processors) and on a single node, using MPI”
Goal
Installation: install MPICH2 on Ubuntu.
Implementation: implement the merge sort algorithm in parallel.
Result: measure how the running time changes as the number of processes increases.
VISION
Background
Programming language
Base of project
Good knowledge of C for implementing the code
C for implementing the algorithm
Python for generating the dataset
Faker library in Python
Background
Cluster
Master and Slave Nodes
Master Node
Slave Node
Background
MPI
Message Passing Interface
MPI is a standard message-passing interface. It is a library
specification, not a language. Programs that users write in
Fortran 77 and C are compiled with ordinary compilers and
linked with the MPI library.
Dataset
Python Script
For generating dataset
We use the Faker module to generate a census-like dataset.
Some basic Faker APIs:
fake.name() generates a fake name
fake.email() generates a fake email address
fake.ean(length=13) generates a unique 13-digit ID
fake.job() generates a fake job title
...and many other APIs
Dataset
Python Script
For generating dataset
Dataset attributes:
Id
Name
Phone Number
Salary
Email
Dataset preview
What is MPI
MPI
Message Passing Interface
A message-passing library specification:
An extended message-passing model
Not a language or compiler specification
Not a specific implementation or product
For parallel computers, clusters, and heterogeneous networks
Designed to permit the development of parallel software libraries
Designed to provide access to advanced parallel hardware for:
End users
Library writers
Tool developers
Communication Modes
Based on the type of send (a short sketch follows this list):
Synchronous send: completes once the matching receive has started
and the sender has received acknowledgement.
Buffered send: completes immediately, unless an error occurs.
Standard send: completes once the message has been sent, which may
or may not imply that the message has arrived at its destination.
Ready send: completes immediately; it is correct only if the receiver
has already posted a matching receive, otherwise the operation is
erroneous and the message may be lost.
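A minimal C sketch of three of these send modes (standard, synchronous, and buffered); this is an illustration, not the project's code. It assumes exactly two processes, and the ready send is omitted because it requires a guaranteed pre-posted receive. Variable names and tags are made up.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 42;
    if (rank == 0) {
        /* standard send: MPI may or may not buffer the message internally */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

        /* synchronous send: completes only after the matching receive has started */
        MPI_Ssend(&value, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);

        /* buffered send: completes locally, using a user-attached buffer */
        int bufsize = MPI_BSEND_OVERHEAD + sizeof(int);
        void *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);
        MPI_Bsend(&value, 1, MPI_INT, 1, 2, MPI_COMM_WORLD);
        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
    } else if (rank == 1) {
        int recvd;
        for (int tag = 0; tag < 3; tag++) {
            MPI_Recv(&recvd, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d (tag %d)\n", recvd, tag);
        }
    }
    MPI_Finalize();
    return 0;
}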
Data Types
The following data types are supported by MPI:
A set of predefined data types corresponding to data types of the host programming language (e.g., MPI_INT, MPI_DOUBLE)
Arrays
Sub-blocks of a matrix
User-defined data structures
API of MPI
#include "mpi.h" provides basic MPI definitions and types.
MPI_Init starts MPI.
MPI_Finalize exits MPI.
Note that all non-MPI routines are local; thus printf runs on each
process.
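A minimal skeleton showing this structure (a sketch, not the project's actual code); every process executes the printf independently.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                   /* start MPI */
    printf("Hello from one MPI process\n");   /* non-MPI routine: runs on every process */
    MPI_Finalize();                           /* exit MPI; no MPI calls allowed after this */
    return 0;
}

With MPICH this is typically compiled with mpicc and launched with mpiexec -n <processes>, printing the line once per process.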
MPI_Init
Initializing MPI
The initialization routine MPI_Init is the first MPI routine
called.
MPI_Init is called only once.
int MPI_Init(int *argc, char ***argv);
MPI_Finalize
After a program has finished using the MPI library, it must call
MPI_Finalize to clean up all MPI state. Once this routine
is called, no MPI routine may be called.
MPI_Finalize is generally the last MPI call in the program.
int MPI_Finalize(void);
MPI_Comm_rank
Determines the rank of the calling process in the communicator
and stores it in the supplied variable (e.g., world_rank).
int MPI_Comm_rank(MPI_Comm comm, int *rank);
Example call: MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
MPI_Comm_size
Determines the size of the group associated with a
communicator.
int MPI_Comm_size(MPI_Comm comm, int *size);
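A short sketch combining the two calls (variable names are illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);   /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);   /* total processes in the communicator */
    printf("Process %d of %d\n", world_rank, world_size);
    MPI_Finalize();
    return 0;
}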
MPI_Scatter
Distributes distinct chunks of data from the root process to every
process in the communicator.
int MPI_Scatter(const void *sendbuf, int sendcount,
MPI_Datatype sendtype, void *recvbuf, int recvcount,
MPI_Datatype recvtype, int root, MPI_Comm comm);
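A small sketch of MPI_Scatter, assuming the array length is divisible by the number of processes; the array contents and chunk size are illustrative, not taken from the project.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int n = 8 * size;                 /* total elements, divisible by size */
    int *all = NULL;
    if (rank == 0) {                  /* only the root owns the full array */
        all = malloc(n * sizeof(int));
        for (int i = 0; i < n; i++) all[i] = i;
    }

    int local[8];                     /* each process receives its own chunk here */
    MPI_Scatter(all, 8, MPI_INT, local, 8, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d got elements %d..%d\n", rank, local[0], local[7]);

    if (rank == 0) free(all);
    MPI_Finalize();
    return 0;
}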
MPI_Gather
MPI_Gather is the opposite of MPI_Scatter: it gathers data from
all processes into a single (master/root) process.
int MPI_Gather(const void *sendbuf, int sendcount,
MPI_Datatype sendtype, void *recvbuf, int recvcount,
MPI_Datatype recvtype, int root, MPI_Comm comm);
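A matching sketch of MPI_Gather: each process contributes one value and the root collects them in rank order (names and values are illustrative).

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int mine = rank * rank;           /* each process computes one value */
    int *collected = NULL;
    if (rank == 0) collected = malloc(size * sizeof(int));

    MPI_Gather(&mine, 1, MPI_INT, collected, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {                  /* only the root sees the gathered array */
        for (int i = 0; i < size; i++)
            printf("value from rank %d: %d\n", i, collected[i]);
        free(collected);
    }
    MPI_Finalize();
    return 0;
}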
MPI_Bcast
Broadcasts a message from the process with rank "root" to all
other processes of the communicator.
int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
int root, MPI_Comm comm);
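A short sketch of MPI_Bcast: the root sets a value and, after the call, every process holds a copy (the value and variable name are illustrative).

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int config = 0;
    if (rank == 0) config = 12345;    /* only the root knows the value initially */

    /* after the broadcast, every process sees config == 12345 */
    MPI_Bcast(&config, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d sees config = %d\n", rank, config);

    MPI_Finalize();
    return 0;
}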
Parallel Merge Sort Algorithm
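The original slide presented the algorithm as a diagram. Below is a minimal, hedged reconstruction of the scatter / local sort / gather / merge pattern described in this deck, not the project's actual source: the element count, the use of qsort for the local sort, and all names are illustrative, and it assumes the input size is divisible by the number of processes.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* merge two sorted runs a[0..n1) and b[0..n2) into out */
static void merge(const int *a, int n1, const int *b, int n2, int *out) {
    int i = 0, j = 0, k = 0;
    while (i < n1 && j < n2) out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    while (i < n1) out[k++] = a[i++];
    while (j < n2) out[k++] = b[j++];
}

static int cmp_int(const void *x, const void *y) {
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;            /* total elements (illustrative) */
    int chunk = n / size;             /* assumes n is divisible by size */
    int *data = NULL;
    if (rank == 0) {                  /* root creates the unsorted input */
        data = malloc(n * sizeof(int));
        for (int i = 0; i < n; i++) data[i] = rand();
    }

    /* distribute equal chunks to every process */
    int *local = malloc(chunk * sizeof(int));
    MPI_Scatter(data, chunk, MPI_INT, local, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    /* each process sorts its own chunk (qsort stands in for a local merge sort) */
    qsort(local, chunk, sizeof(int), cmp_int);

    /* collect the sorted chunks back on the root, in rank order */
    MPI_Gather(local, chunk, MPI_INT, data, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* merge the sorted chunks one by one into a single sorted array */
        int *tmp = malloc(n * sizeof(int));
        int merged = chunk;           /* length of the already-merged prefix */
        for (int p = 1; p < size; p++) {
            merge(data, merged, data + p * chunk, chunk, tmp);
            merged += chunk;
            for (int i = 0; i < merged; i++) data[i] = tmp[i];
        }
        printf("sorted %d elements with %d processes\n", n, size);
        free(tmp);
        free(data);
    }
    free(local);
    MPI_Finalize();
    return 0;
}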
Result
No. of Processes   Reading input file (ms)   Sorting the data (ms)   Writing output file (ms)
1                  3982.326                  8528.112                5839.587
2                  4050.897                  6878.000                5401.234
4                  8145.740                  12073.895               11083.125
5                  10178.689                 14361.952               13087.155
Conclusion
In this project we focused on using an MPI implementation as the
message-passing interface on a Linux platform. The effect of the
number of parallel processes, and of the number of cores, on the
performance of the parallel merge sort algorithm has been studied
both theoretically and experimentally.
Thank You