Message Passing Interface: Parallel Processing Course University of Tehran
Explicit Parallelism
Analogous to multithreading for shared memory, but for distributed memory the parallelism is expressed through explicit message passing.
The user has explicit control over processes. Good: this control can be exploited for performance benefit.
[Figure: distributed-memory architecture — processors proc0..procN, each with its own memory mem0..memN, connected by a network; runs on any such hardware]
Message passing is used for communication among processes.
Inter-process communication:
Type: synchronous / asynchronous
Movement of data from one process's address space to another's
Still need to do synchronization? Sometimes, but it often goes hand in hand with data communication.
MPI provides send/receive operations (in various flavors) to distribute and communicate data, and provides additional synchronization facilities.
MPI Services
Hide details of architecture
MPI Basics
Starting and Finishing
Identifying yourself
Sending and receiving messages
Communicator: a collection of processes
Determines the scope to which messages are relative
The identity of a process (rank) is relative to a communicator
Scope of global communications (broadcast, etc.)
Code:
MPI_Init(&argc, &argv);
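A minimal sketch putting these basics together (starting, identifying yourself, finishing); it is the kind of program behind the helloworld output shown later, though the exact lab code may differ.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                  /* start MPI                  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* identify yourself (rank)   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many processes in all  */
    printf("Hello from %d\n", rank);
    MPI_Finalize();                          /* finish MPI                 */
    return 0;
}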
MPI Messages
Message content: a sequence of bytes.
Postal-mail analogy: a letter or magazine carries an address, a return address, a type of mailing (class), a weight, and a country.
MPI_Recv parameters:
buf: address of receive buffer
count: size of receive buffer in elements
datatype: data type of receive buffer elements
source: source process id, or MPI_ANY_SOURCE
tag and comm: ignore for now
status: status object
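A sketch of how these receive parameters pair up with a matching send; the message size, tag 0, and the two ranks are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, data[10];
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        for (int i = 0; i < 10; i++) data[i] = i;
        /* buf, count, datatype, dest, tag, comm */
        MPI_Send(data, 10, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* buf, count, datatype, source (or MPI_ANY_SOURCE), tag, comm, status */
        MPI_Recv(data, 10, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d ... %d\n", data[0], data[9]);
    }
    MPI_Finalize();
    return 0;
}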
Data Types
The data message which is sent or received is described by a triple (address, count, datatype). The following data types are supported by MPI:
Predefined data types corresponding to data types from the programming language
Arrays
Sub-blocks of a matrix
User-defined data structures
MPI uses its own set of predefined data types because communications may take place between heterogeneous machines, which can have different data representations and lengths in memory.
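As a sketch of a user-defined datatype, the following describes a sub-block of a matrix (one column of a 4x4 array) with MPI_Type_vector; the matrix contents and the two participating ranks are just for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    double a[4][4];                      /* 4x4 matrix, stored row-major */
    MPI_Datatype column;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            a[i][j] = rank * 100 + i * 4 + j;
    /* 4 blocks of 1 double, 4 doubles apart: one column of the matrix */
    MPI_Type_vector(4, 1, 4, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);
    if (size >= 2) {
        if (rank == 0)   /* the triple (&a[0][0], 1, column) describes column 0 */
            MPI_Send(&a[0][0], 1, column, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1) {
            MPI_Recv(&a[0][0], 1, column, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1: a[0][0] is now %g\n", a[0][0]);
        }
    }
    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}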
Output
> mpirun -np 4 ./helloworld
Hello from 1
Hello from 2
Hello from 3
Point-to-Point communications
A synchronous communication does not complete until the message has been received.
Non-blocking operations
Non-blocking communication allows useful work to be performed while waiting for the communication to complete.
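A sketch of overlapping work with communication using MPI_Isend/MPI_Irecv and MPI_Wait; the single-int message and the "useful work" are placeholders.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, sendbuf = 0, recvbuf = 0;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        sendbuf = 42;
        MPI_Isend(&sendbuf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... useful work that does not touch sendbuf goes here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* now safe to reuse sendbuf   */
    } else if (rank == 1) {
        MPI_Irecv(&recvbuf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* ... useful work that does not touch recvbuf goes here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* message is now in recvbuf   */
        printf("rank 1 received %d\n", recvbuf);
    }
    MPI_Finalize();
    return 0;
}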
Collective communications
Broadcast
A broadcast sends a message to a number of recipients
Barrier
A barrier operation synchronises a number of processors.
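A minimal sketch of a barrier, here used to establish a common starting point before taking a timestamp; the timing use is only an illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    /* ... each process does some local work here ... */
    MPI_Barrier(MPI_COMM_WORLD);   /* no process passes this point until   */
                                   /* every process has reached it         */
    double t = MPI_Wtime();        /* a timestamp taken after all are done */
    printf("synchronized at t = %f\n", t);
    MPI_Finalize();
    return 0;
}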
Reduction operations
Reduction operations reduce data from a number of processors to a single item.
MPI_Reduce combines data from all processes in a communicator and returns it to one process.
Syntax: MPI_Reduce(void *message, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
In many numerical algorithms, send/receive can be replaced by Bcast/Reduce, improving both simplicity and efficiency.
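A sketch using the syntax above to sum one value from every process onto the root; the local value each process contributes is illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, local, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    local = rank + 1;                       /* each process contributes rank+1 */
    /* message, recvbuf, count, datatype, op, root, comm */
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum over all processes = %d\n", sum);
    MPI_Finalize();
    return 0;
}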
MPI_Gather()
MPI_Scatter()
MPI_Alltoall()
MPI_Reduce()
MPI_Bcast()
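A sketch of MPI_Scatter and MPI_Gather: the root hands one element to each process, every process computes locally, and the root collects the results. Assumes it is run with at most 8 processes; the data and the local computation are placeholders.

#include <mpi.h>
#include <stdio.h>

#define MAXP 8

int main(int argc, char *argv[]) {
    int rank, size, item, result, sendbuf[MAXP], recvbuf[MAXP];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0)
        for (int i = 0; i < size; i++) sendbuf[i] = i * i;
    /* root hands one element to every process (including itself) */
    MPI_Scatter(sendbuf, 1, MPI_INT, &item, 1, MPI_INT, 0, MPI_COMM_WORLD);
    result = item + rank;                    /* some local computation */
    /* root collects one element back from every process */
    MPI_Gather(&result, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0)
        for (int i = 0; i < size; i++) printf("from %d: %d\n", i, recvbuf[i]);
    MPI_Finalize();
    return 0;
}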
Broadcasting a message
Broadcast: one sender, many receivers.
Includes all processes in the communicator; all processes must make an equivalent call to MPI_Bcast.
Any processor may be the sender (root), as determined by the fourth parameter.
The first three parameters specify the message, as for MPI_Send and MPI_Recv; the fifth parameter specifies the communicator.
Broadcast serves as a global synchronization.
MPI_Bcast() Syntax
MPI_Bcast(mess, count, MPI_INT, root, MPI_COMM_WORLD);
mess: pointer to message buffer
count: number of items sent
MPI_INT: type of item sent (note: count and type should be the same on all processors)
root: sending processor
MPI_COMM_WORLD: communicator within which the broadcast takes place
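A sketch matching the call above: root 0 sets a value and every process ends up with it, all calling MPI_Bcast with the same count and type. The value broadcast here is just an illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, mess[1] = {0};
    int root = 0, count = 1;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == root)
        mess[0] = 100;            /* e.g. a problem size known only at the root */
    /* every process makes an equivalent call */
    MPI_Bcast(mess, count, MPI_INT, root, MPI_COMM_WORLD);
    printf("rank %d now has %d\n", rank, mess[0]);
    MPI_Finalize();
    return 0;
}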
Examine add.c
Edit add_mpi.c
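The lab files add.c and add_mpi.c are not reproduced here. Purely as an illustration of the kind of edit the exercise asks for, the following is one plausible shape for add_mpi.c, assuming the serial add.c sums an array of N numbers; the actual lab code may differ.

#include <mpi.h>
#include <stdio.h>

#define N 1000

int main(int argc, char *argv[]) {
    int rank, size;
    double a[N], local = 0.0, total = 0.0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0)                              /* root initializes the data     */
        for (int i = 0; i < N; i++) a[i] = 1.0;
    MPI_Bcast(a, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    for (int i = rank; i < N; i += size)        /* each rank sums a strided slice */
        local += a[i];
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %f\n", total);
    MPI_Finalize();
    return 0;
}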
mpicc -o pi pi.c
or
mpic++ -o pi pi.cpp
mpirun -np <number of procs> -machinefile XXX pi
-machinefile tells MPI to run the program on the machines listed in the file XXX.
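The machinefile itself is just a plain-text list of host names, one per line; these names are placeholders:

node01
node02
node03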
Tools are provided to build, compile, run, and analyze performance. Other implementations: LAM MPI, OpenMPI, vendor-specific MPIs.
MPI Sources
Standard: http://www.mpi-forum.org
Books:
Using MPI: Portable Parallel Programming with the Message-Passing Interface, by Gropp, Lusk, and Skjellum, MIT Press, 1994.
MPI: The Complete Reference, by Snir, Otto, Huss-Lederman, Walker, and Dongarra, MIT Press, 1996.
Designing and Building Parallel Programs, by Ian Foster, Addison-Wesley, 1995.
Parallel Programming with MPI, by Peter Pacheco, Morgan Kaufmann, 1997.
MPI: The Complete Reference, Vol. 1 and 2, MIT Press, 1998 (Fall).
Other information on the Web: http://www.mcs.anl.gov/mpi