2a.1 Message-Passing Computing

More MPI routines:
- Collective routines
- Synchronous routines
- Non-blocking routines

ITCS 4/5145 Parallel Computing, UNC-Charlotte, B. Wilkinson, 2009.
2a.2 Collective message-passing routines

Routines that send message(s) to a group of processes or receive message(s) from a group of processes. They achieve higher efficiency than separate point-to-point routines, although they are not strictly necessary.
2a.3 Collective Communication

Involves a set of processes, defined by an intra-communicator. Message tags are not present. Principal collective operations:

MPI_Bcast() - Broadcast from root to all other processes
MPI_Gather() - Gather values from a group of processes
MPI_Scatter() - Scatter a buffer in parts to a group of processes
MPI_Alltoall() - Send data from all processes to all processes
MPI_Reduce() - Combine values on all processes into a single value
MPI_Reduce_scatter() - Combine values and scatter the results
MPI_Scan() - Compute prefix reductions of data on processes
MPI_Barrier() - Synchronize processes by stopping each one until all have reached a specific "barrier" call
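All of these must be called by every process in the communicator. As a quick illustration of the collective style, here is a minimal sketch using MPI_Reduce to sum one value per process onto rank 0 (this example is not in the original slides; the choice of MPI_SUM and root 0 is arbitrary):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, sum;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* every process contributes its rank; only rank 0 receives the sum */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum of ranks = %d\n", sum);
    MPI_Finalize();
    return 0;
}

Note that, unlike point-to-point calls, no message tag appears anywhere in the call.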
2a.4 MPI broadcast operation

Broadcast: sending the same message to all processes in the communicator. (Compare multicast: sending the same message to a defined group of processes.)
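A minimal usage sketch (not from the original slide; assumes MPI_Init has already been called and rank holds this process's rank from MPI_Comm_rank):

int n;
if (rank == 0)
    n = 100;    /* only the root has the value initially */
MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
/* after the call, every process's copy of n holds 100 */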
2a.5 MPI_Bcast parameters
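The parameter figure from the original slide is not reproduced in this text version; for reference, the standard prototype, with each parameter annotated, is:

int MPI_Bcast(
    void *buf,              /* address of buffer: send buffer at root, receive buffer elsewhere */
    int count,              /* number of entries in buffer */
    MPI_Datatype datatype,  /* data type of buffer entries */
    int root,               /* rank of the broadcasting process */
    MPI_Comm comm           /* communicator */
);

Note that buf serves as both the send buffer (at the root) and the receive buffer (at every other process).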
2a.6 Basic MPI scatter operation

Sending each element of an array in the root process to a separate process. The contents of the i-th location of the array are sent to the i-th process.
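A sketch of this one-element-per-process case (not from the original slide; assumes rank 0 is the root and the array size of 64 is an arbitrary choice that must be at least the number of processes):

int table[64];  /* send buffer, significant only at the root */
int myval;
MPI_Scatter(table, 1, MPI_INT, &myval, 1, MPI_INT, 0, MPI_COMM_WORLD);
/* process i now holds table[i] in myval */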
2a.7 MPI scatter parameters
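As with MPI_Bcast, the parameter figure is not reproduced here; the standard prototype is:

int MPI_Scatter(
    void *sendbuf,          /* address of send buffer, significant only at root */
    int sendcount,          /* number of elements sent to each process */
    MPI_Datatype sendtype,  /* data type of send buffer elements */
    void *recvbuf,          /* address of receive buffer */
    int recvcount,          /* number of elements in receive buffer */
    MPI_Datatype recvtype,  /* data type of receive buffer elements */
    int root,               /* rank of the sending process */
    MPI_Comm comm           /* communicator */
);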
2a.8

The simplest scatter is as illustrated, in which one element of an array is sent to each process. The extension provided by the MPI_Scatter() routine is to send a fixed number of contiguous elements to each process.
2a.9 Scattering contiguous groups of elements to each process
Example

In the following code, the size of the send buffer is given by 100 * <number of processes>, and 100 contiguous elements are sent to each process:

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int size, *sendbuf, recvbuf[100];  /* 100 elements for each process */
    MPI_Init(&argc, &argv);            /* initialize MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    sendbuf = (int *)malloc(size * 100 * sizeof(int));
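The preview ends at this point. A plausible completion, which is an assumption rather than part of the original text (rank 0 as root, with the matching scatter call implied by the description above), would be:

    /* ... root fills sendbuf here ... */
    MPI_Scatter(sendbuf, 100, MPI_INT, recvbuf, 100, MPI_INT, 0, MPI_COMM_WORLD);
    /* each process now has its own 100 elements in recvbuf */
    free(sendbuf);
    MPI_Finalize();
    return 0;
}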