SP Parallel Programming Workshop
Message Passing Interface (MPI)

Table of Contents

1. The Message Passing Paradigm
2. What Is MPI?
3. Getting Started
4. Environment Management Routines
5. Point to Point Communication Routines
   1. MPI Message Passing Routine Arguments
   2. Blocking Message Passing Routines
   3. Non-Blocking Message Passing Routines
6. Collective Communication Routines
7. Derived Data Types
8. Group and Communicator Management Routines
9. Virtual Topologies
10. A Look Into the Future: MPI-2
11. References and More Information
12. Appendix A: MPI Routine Index
13. Exercise

The Message Passing Paradigm

Note: This section can be skipped if the reader is already familiar with message passing concepts.

Distributed Memory
Every processor has its own local memory, which can be accessed directly only by its own CPU. Transfer of data from one processor to another is performed over a network. This differs from shared memory systems, which permit multiple processors to directly access the same memory resource via a memory bus.

Message Passing

The method by which data from one processor's memory is copied to the memory of another processor. In distributed memory systems, data is generally sent as packets of information over a network from one processor to another. A message may consist of one or more packets, and usually includes routing and/or other control information.

Process

A process is a set of executable instructions (a program) which runs on a processor. One or more processes may execute on a processor. In a message passing system, all processes communicate with each other by sending messages - even if they are running on the same processor. For reasons of efficiency, however, message passing systems generally associate only one process per processor.
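To make the above concrete, here is a minimal sketch (not part of the original workshop text) of a program in which several processes run as one MPI job. It borrows the MPI_Init, MPI_Comm_rank, MPI_Comm_size and MPI_Finalize routines that are introduced later in this tutorial; the file name procs.c is an illustrative assumption.

/* procs.c - minimal sketch: an MPI job is a collection of processes,
   each identified by a rank, that communicate only by passing messages. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the message passing environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's identifier (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes in the job  */

    /* Each process has its own local memory; the only way to share data
       with another process is to send it a message. */
    printf("Process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the message passing environment */
    return 0;
}

On most installations a command along the lines of mpicc procs.c -o procs followed by mpirun -np 4 ./procs starts four such processes, although compiler wrappers and launchers vary from system to system.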
Message Passing Library

Usually refers to a collection of routines that are embedded in application code to accomplish send, receive and other message passing operations.

Send / Receive

Message passing involves the transfer of data from one process (send) to another process (receive). It requires the cooperation of both the sending and the receiving process. Send operations usually require the sending process to specify the data's location, size, type and the destination. Receive operations should match a corresponding send operation.

Synchronous / Asynchronous

A synchronous send operation will complete only after acknowledgement that the message was safely received by the receiving process. Asynchronous send operations may "complete" even though the receiving process has not actually received the message.

Application Buffer

The address space that holds the data which is to be sent or received. For example, suppose your program uses a variable called "inmsg". The application buffer for inmsg is the program memory location where the value of inmsg resides.

System Buffer

System-managed space for holding messages in transit. Depending on the type of send/receive operation, data in the application buffer may be copied to or from system buffer space, which allows send operations to complete asynchronously.
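These definitions map directly onto MPI's blocking point-to-point routines, which are covered in detail later in this tutorial. The sketch below is an illustration written for this section, not taken from the original text: it pairs one send with one matching receive, the variables outmsg and inmsg are the application buffers, the file name sendrecv.c is hypothetical, and the program must be run with at least two processes.

/* sendrecv.c - one blocking send matched by one blocking receive
   between process 0 and process 1. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    int outmsg = 42;   /* application buffer on the sending side   */
    int inmsg  = 0;    /* application buffer on the receiving side */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Standard send: it may "complete" as soon as the data has been
           copied out of outmsg (possibly into a system buffer), i.e. before
           the receiver actually has the message.  MPI_Ssend would instead
           complete only after the matching receive has begun (synchronous). */
        MPI_Send(&outmsg, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* The receive specifies where the data goes (inmsg), its size and
           type, and must match the send's source, tag and communicator. */
        MPI_Recv(&inmsg, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
        printf("Process 1 received %d\n", inmsg);
    }

    MPI_Finalize();
    return 0;
}

Notice the cooperation: neither side can transfer the data alone, and the send specifies the data's location, size, type and destination exactly as described above.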