Lec11-MPI-12

Parallel Programming and MPI, Lecture 2
Abhik Roychoudhury, CS 3211, National University of Singapore
Sample material: Parallel Programming by Lin and Snyder, Chapter 7.

Summary of previous lecture
- MPI as a programming interface
- Message-passing communication
- Communicating sequential processes
- Entering and exiting MPI: MPI_Init, MPI_Finalize
- Point-to-point communication: MPI_Send, MPI_Recv, MPI_Isend, MPI_Irecv (a contrasting sketch appears at the end of this section)
- Wait and test operations to complete communication

In today's lecture
- Collective communication
  - Communicate among multiple processes simultaneously.
  - Substantially differs from the send/receive-based point-to-point communication studied earlier.
  - What are the communication primitives?

Collective communication in MPI
- Barrier synchronization across a set of processes
- Global communication functions
  - Broadcast to a set of processes
  - Gather data from all members to one member
  - Scatter data to all members
- Global reduction operations (a sketch appears below, after the Communicators material)
  - Possible reduction functions include sum, max, min, etc.
  - Accumulate return values from a set of processes and apply a reduction function to obtain a result.
  - The result may be returned to all members, or only to a selected process.

Collective communication features
- In MPI, collective operations have the following features:
  - The amount of data sent must exactly match the amount of data specified by the receiver.
  - No message tags are used.
  - Only blocking communication is allowed.

Communicators
- A scoping mechanism to define a set of processes communicating with each other.
  - E.g., define a separate communicator for a library, to keep messages from library routines distinct from application-level routines.
- A group of processes, assigned a globally unique id.
  - A group is an ordered set of processes.
  - Each process in the group has a unique rank (previous lecture!).
- A process can, of course, belong to multiple groups.
- We can assume that each communicator we deal with has its own group as well.
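To make the scoping idea concrete, here is a minimal sketch using MPI_Comm_split, the standard MPI call (not shown on the slide) for carving a new communicator, with its own group and ranks, out of MPI_COMM_WORLD:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int world_rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* Split MPI_COMM_WORLD into two communicators by rank parity.
           Each new communicator has its own group, and each process
           gets a fresh rank (0, 1, ...) within that group. */
        int color = world_rank % 2;
        MPI_Comm sub_comm;
        MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);

        int sub_rank;
        MPI_Comm_rank(sub_comm, &sub_rank);
        printf("world rank %d -> color %d, sub rank %d\n",
               world_rank, color, sub_rank);

        MPI_Comm_free(&sub_comm);
        MPI_Finalize();
        return 0;
    }

Run with, e.g., mpirun -np 4 ./a.out: even and odd world ranks land in separate communicators, so any collective on sub_comm stays scoped to half the processes.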
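The global reduction bullets above can be illustrated with MPI_Reduce and MPI_Allreduce; a minimal sketch, assuming each process contributes a single int and MPI_SUM as the reduction function:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each process contributes one value; MPI_SUM combines them. */
        int local = rank + 1;

        /* MPI_Reduce delivers the result only to the selected root. */
        int total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum at root: %d\n", total);

        /* MPI_Allreduce returns the result to all members instead. */
        int total_all = 0;
        MPI_Allreduce(&local, &total_all, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        printf("rank %d sees sum %d\n", rank, total_all);

        MPI_Finalize();
        return 0;
    }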

Barrier synchronization
- int MPI_Barrier(MPI_Comm comm)
  - Blocks the caller until all group members have called it.
  - Returns at any process only after all group members have entered the call.

Global communication
- Broadcast
- Scatter
- Gather
- Allgather

Broadcast
- int MPI_Bcast(buffer, count, datatype, root, comm)
  - buffer: starting address of the buffer
  - count: number of entries in the buffer
  - datatype: data type of the buffer entries
  - root: rank of the broadcasting process
  - comm: the communicator capturing the group of processes
- Example: see the sketch below.
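A minimal sketch for the example slot above, exercising both MPI_Bcast and MPI_Barrier with rank 0 as the root (an illustrative sketch, not the original slide's example):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Root (rank 0) fills the buffer; every process passes the
           same count, datatype, root, and communicator arguments. */
        int data[4] = {0, 0, 0, 0};
        if (rank == 0)
            for (int i = 0; i < 4; i++)
                data[i] = 10 * (i + 1);
        MPI_Bcast(data, 4, MPI_INT, 0, MPI_COMM_WORLD);
        printf("rank %d received data[0] = %d\n", rank, data[0]);

        /* No process returns from the barrier until every group
           member has entered the call. */
        MPI_Barrier(MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }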
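For contrast with the collectives above, a minimal sketch of the nonblocking point-to-point primitives recapped at the start of this lecture (MPI_Isend and MPI_Irecv, completed with MPI_Wait); note the explicit message tag, which collective operations do not use:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Run with at least 2 processes: rank 0 sends one int to
           rank 1; MPI_Wait completes each nonblocking call. */
        if (rank == 0) {
            int msg = 42;
            MPI_Request req;
            MPI_Isend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            int msg;
            MPI_Request req;
            MPI_Irecv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", msg);
        }

        MPI_Finalize();
        return 0;
    }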