2.1
Message-Passing Computing
ITCS 4/5145 Parallel Computing, UNC-Charlotte, B. Wilkinson, 2010. Aug 26, 2010.
2.2
Software Tools for Clusters
Late 1980s: Parallel Virtual Machine (PVM) developed; became very popular.
Mid 1990s: Message-Passing Interface (MPI) standard defined.
Both are based upon the message-passing parallel programming model.
Both provide a set of user-level libraries for message passing, for use with sequential programming languages (C, C++, ...).
2.3
MPI
(Message Passing Interface)
• Message-passing library standard developed by a group of academics and industrial partners to foster more widespread use and portability.
• Defines routines, not implementation.
• Several free implementations exist.
2.4
Message passing concept
using library routines
2.5
Message routing between computers typically done by daemon processes
installed on computers that form the “virtual machine”.
[Figure: three workstations, each running an application program (executable) and a daemon process; messages are sent between the daemon processes through the network.]
Can have more than one process
running on each computer.
2.6
Message-Passing Programming using
User-level Message-Passing Libraries
Two primary mechanisms needed:
1. A method of creating processes for execution on
different computers
2. A method of sending and receiving messages
Creating processes on
different computers
2.7
2.8
Multiple program, multiple data (MPMD)
model
[Figure: separate source files, each compiled to suit its processor, producing a different executable on each of Processor 0 through Processor p - 1.]
• Different programs executed by each processor
2.9
Single Program Multiple Data (SPMD) model
[Figure: a single source file compiled to suit each processor, producing the executables run on Processor 0 through Processor p - 1.]
Basic MPI way
• Same program executed by each processor
• Control statements select different parts for each
processor to execute.
In MPI, processes within a defined communicating group are given a number called a rank, starting from zero onwards.
The program uses control constructs, typically IF statements, to direct processes to perform specific actions.
Example
if (rank == 0) ... /* do this */;
if (rank == 1) ... /* do this */;
.
.
.
2.10
Master-Slave approach
Usually the computation is constructed as a master-slave model:
One process (the master) performs one set of actions and all the other processes (the slaves) perform identical actions, although on different data, i.e.
if (rank == 0) ... /* master do this */;
else ... /* all slaves do this */;
2.11
Static process creation
• All executables started together.
• Done when one starts the compiled programs.
• Normal MPI way.
2.12
2.13
Multiple Program Multiple Data (MPMD) Model
with Dynamic Process Creation
[Figure: Process 1 calls spawn(), which starts execution of Process 2 at a later point in time.]
• One processor executes the master process.
• Other processes are started from within the master process.
Available in MPI-2.
Might find applicability if it is not known initially how many processes are needed.
Does have a process-creation overhead.
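In MPI-2 the routine that provides this is MPI_Comm_spawn(). A minimal sketch (not from the original slides) of a master dynamically starting worker processes; the executable name "worker" and the count 4 are illustrative:

#include "mpi.h"

int main(int argc, char *argv[]) {
   MPI_Comm workers;          /* inter-communicator to the spawned processes */
   int errcodes[4];

   MPI_Init(&argc, &argv);

   /* Start 4 copies of the executable "worker". The spawned processes get
      their own MPI_COMM_WORLD; "workers" lets this process talk to them. */
   MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                  0, MPI_COMM_SELF, &workers, errcodes);

   /* ... communicate with the workers through the inter-communicator ... */

   MPI_Finalize();
   return 0;
}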
Methods of sending and
receiving messages
2.14
2.15
Basic “point-to-point”
Send and Receive Routines
Passing a message between processes using send() and recv() library calls (generic syntax; actual formats later):
[Figure: Process 1 executes send(&x, 2) to send the contents of x to process 2; Process 2 executes recv(&y, 1) to receive the data from process 1 into y.]
2.16
MPI point-to-point message passing using
MPI_Send() and MPI_Recv() library calls
Semantics of MPI_Send() and MPI_Recv()
Called blocking, which in MPI means that the routine waits until all its local actions have taken place before returning.
After returning, any local variables used can be altered without affecting the message transfer.
MPI_Send() - the message may not have reached its destination, but the process can continue in the knowledge that the message is safely on its way.
MPI_Recv() - returns when the message has been received and the data collected. Will cause the process to stall until the message is received.
Other versions of MPI_Send() and MPI_Recv() have different
semantics.
2.17
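A small sketch (not from the original slides) of what these blocking semantics permit: once MPI_Send() returns, the send buffer may be altered without affecting the message already in transit.

#include "mpi.h"

int main(int argc, char *argv[]) {
   int myrank, x = 100;
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
   if (myrank == 0) {
      MPI_Send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      x = 200;     /* safe: the blocking send has returned, so altering x
                      cannot affect the message on its way */
   } else if (myrank == 1) {
      MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
      /* x holds 100 here, whatever rank 0 did after its send returned */
   }
   MPI_Finalize();
   return 0;
}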
2.18
Message Tag
• Used to differentiate between different types of
messages being sent.
• Message tag is carried within message.
• If special type matching is not required, a wild card message tag is used; then recv() will match with any send().
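In MPI the wild card tag is MPI_ANY_TAG (and MPI_ANY_SOURCE for the source rank). A fragment sketching its use, assuming the usual headers and initialization:

MPI_Status status;
int y;

/* Match a message from any source with any tag; the actual source and
   tag can be read from the status structure afterwards. */
MPI_Recv(&y, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
printf("Received from rank %d with tag %d\n",
       status.MPI_SOURCE, status.MPI_TAG);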
2.19
Message Tag Example
To send a message, x, with message tag 5 from a source process, 1, to a destination process, 2, and assign it to y:
[Figure: Process 1 executes send(&x, 2, 5); Process 2 executes recv(&y, 1, 5), which waits for a message from process 1 with a tag of 5; the data moves from x to y.]
2.20
Unsafe message passing - Example
[Figure: Process 0's user code calls send(...,1,...) and Process 1's user code calls recv(...,0,...); both processes also call a library routine lib() that performs its own message passing internally.
(a) Intended behavior: the user code's send() in Process 0 is received by the user code's recv() in Process 1, and the messages inside lib() match each other.
(b) Possible behavior: the user code's send() is instead received by a recv() inside lib() on the other process (or a message from lib() is taken by the user code's recv()), so a message reaches the wrong destination.]
2.21
MPI Solution
“Communicators”
• Defines a communication domain - a set of
processes that are allowed to communicate
between themselves.
• Communication domains of libraries can be
separated from that of a user program.
• Used in all point-to-point and collective MPI
message-passing communications.
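One common way to create such a separate communication domain is MPI_Comm_split() (MPI_Comm_dup() is another). A fragment, for illustration only and not from the original slides, splitting MPI_COMM_WORLD into two groups by rank parity:

int myrank;
MPI_Comm subcomm;

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

/* Processes passing the same "color" (even or odd rank here) end up in
   the same new communicator. Messages sent in subcomm cannot be matched
   by receives posted on MPI_COMM_WORLD. */
MPI_Comm_split(MPI_COMM_WORLD, myrank % 2, myrank, &subcomm);

/* ... point-to-point and collective calls using subcomm ... */

MPI_Comm_free(&subcomm);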
2.22
Default Communicator
MPI_COMM_WORLD
• Exists as first communicator for all processes
existing in the application.
• A set of MPI routines exists for forming
communicators.
• Processes have a “rank” in a communicator.
2.23
Using SPMD Computational Model
int main(int argc, char *argv[]) {
   int myrank;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &myrank);   /* find rank */
   if (myrank == 0)
      master();
   else
      slave();
   MPI_Finalize();
   return 0;
}
where master() and slave() are to be executed by master
process and slave process, respectively.
2.24
Parameters of blocking send
MPI_Send(buf, count, datatype, dest, tag, comm)

buf      - address of send buffer
count    - number of items to send
datatype - datatype of each item
dest     - rank of destination process
tag      - message tag
comm     - communicator
2.25
Parameters of blocking receive
MPI_Recv(buf, count, datatype, src, tag, comm, status)

buf      - address of receive buffer
count    - maximum number of items to receive
datatype - datatype of each item
src      - rank of source process
tag      - message tag
comm     - communicator
status   - status after operation
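The status parameter records the actual source and tag, and MPI_Get_count() recovers how many items really arrived, which is useful when wild cards or a generous maximum count are used. A short fragment, not from the original slides, assuming the usual headers and initialization:

MPI_Status status;
int buffer[100], n;

MPI_Recv(buffer, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);

MPI_Get_count(&status, MPI_INT, &n);   /* may be fewer than 100 */
printf("Got %d integers from rank %d (tag %d)\n",
       n, status.MPI_SOURCE, status.MPI_TAG);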
2.26
Example
To send an integer x from process 0 to process 1:

int x, msgtag = 0;            /* any agreed tag value */
MPI_Status status;

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);   /* find rank */
if (myrank == 0) {
   MPI_Send(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);
} else if (myrank == 1) {
   MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
}
Sample MPI Hello World program
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include "mpi.h"
int main(int argc, char **argv) {
   char message[20];
   int i, rank, size, type = 99;
   MPI_Status status;
   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD, &size);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   if (rank == 0) {
      strcpy(message, "Hello, world");
      for (i = 1; i < size; i++)
         MPI_Send(message, 13, MPI_CHAR, i, type, MPI_COMM_WORLD);
   } else
      MPI_Recv(message, 20, MPI_CHAR, 0, type, MPI_COMM_WORLD, &status);
   printf("Message from process = %d : %.13s\n", rank, message);
   MPI_Finalize();
   return 0;
}
2.27
Program sends message "Hello, world" from master process (rank = 0) to each of the other processes (rank != 0). Then, all processes execute a printf statement.
In MPI, standard output automatically redirected from remote
computers to the user’s console so final result will be
Message from process =1 : Hello, world
Message from process =0 : Hello, world
Message from process =2 : Hello, world
Message from process =3 : Hello, world
...
except that the order of messages might be different but is
unlikely to be in ascending order of process ID; it will depend
upon how the processes are scheduled.
2.28
Setting Up the Message Passing
Environment
Usually computers specified in a file, called a hostfile
or machines file.
File contains names of computers and possibly
number of processes that should run on each
computer.
Implementation-specific algorithm selects computers
from list to run user programs.
2.29
Users may create their own machines file for their
program.
Example
coit-grid01.uncc.edu
coit-grid02.uncc.edu
coit-grid03.uncc.edu
coit-grid04.uncc.edu
coit-grid05.uncc.edu
If a machines file is not specified, a default machines file is used, or it may be that the program will only run on a single computer.
2.30
2.31
Compiling/Executing MPI Programs
• Minor differences in the command lines required
depending upon MPI implementation.
• For the assignments, we will use MPICH or
MPICH-2.
• Generally, a machines file needs to be present that lists all the computers to be used. MPI then uses those computers listed. Otherwise, it will simply run on one computer.
2.32
MPICH and MPICH-2
• Both Windows and Linux versions
• Very easy to install on a Windows system.
2.33
MPICH Commands
Two basic commands:
• mpicc, a script to compile MPI programs
• mpirun, the original command to execute
an MPI program, or
• mpiexec, the MPI-2 standard command; mpiexec replaces mpirun, although mpirun still exists.
2.34
Compiling/executing (SPMD) MPI program
For MPICH. At a command line:
To start MPI: Nothing special.
To compile MPI programs:
for C mpicc -o prog prog.c
for C++ mpiCC -o prog prog.cpp
To execute MPI program:
mpiexec -n no_procs prog
or
mpirun -np no_procs prog
(no_procs is the number of processes, a positive integer)
2.35
Executing MPICH program on
multiple computers
Create a file called say “machines”
containing the list of machines, say:
coit-grid01.uncc.edu
coit-grid02.uncc.edu
coit-grid03.uncc.edu
coit-grid04.uncc.edu
coit-grid05.uncc.edu
2.36
mpiexec -machinefile machines -n 4 prog
would run prog with four processes.
Each process would execute on one of the machines in the list. MPI would cycle through the list of machines, assigning processes to machines.
Can also specify the number of processes on a particular machine by adding that number after the machine name (see the example below).
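For example, with MPICH the count is typically appended after a colon (the exact syntax varies between implementations, so check the local documentation); a machines file of this form might look like:

coit-grid01.uncc.edu:2
coit-grid02.uncc.edu:2
coit-grid03.uncc.edu:4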
2.37
Debugging/Evaluating Parallel
Programs Empirically
2.38
Visualization Tools
Programs can be watched as they are executed in
a space-time diagram (or process-time diagram):
[Figure: space-time diagram with one line per process (Process 1, 2, 3) against time, distinguishing computing, waiting, and message-passing system routines, with arrows showing messages between processes.]
2.39
Implementations of visualization tools are
available for MPI.
An example is the Upshot program visualization
system.
2.40
Evaluating Programs Empirically
Measuring Execution Time
To measure the execution time between point L1 and point L2 in the code, might have a construction such as:

#include <time.h>
...
time_t t1, t2;
double elapsed_Time;
...
L1: time(&t1);               /* start timer */
...
L2: time(&t2);               /* stop timer */
...
elapsed_Time = difftime(t2, t1);     /* time = t2 - t1 */
printf("Elapsed time = %5.2f secs\n", elapsed_Time);
2.41
MPI provides the routine MPI_Wtime() for returning
time (in seconds):
double start_time, end_time, exe_time;
start_time = MPI_Wtime();
.
.
end_time = MPI_Wtime();
exe_time = end_time - start_time;
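A short sketch (not from the original slides) of MPI_Wtime() in context, with myrank assumed to be set as in the earlier skeleton. The barrier is optional but lines the processes up before timing starts, and MPI_Wtick() reports the timer resolution:

double start_time, end_time;

MPI_Barrier(MPI_COMM_WORLD);      /* optional: synchronize before timing */
start_time = MPI_Wtime();

/* ... code being timed ... */

end_time = MPI_Wtime();
printf("Rank %d: elapsed = %f s (clock resolution %g s)\n",
       myrank, end_time - start_time, MPI_Wtick());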
2.42
Next topic
• Discussion of first assignment
– To write and execute some simple MPI
programs.
– Will include timing execution.