2.1 Message-Passing Computing
ITCS 4/5145 Parallel Computing, UNC-Charlotte, B. Wilkinson, 2010. Aug 26, 2010.
2.2 Software Tools for Clusters
Late 1980s: Parallel Virtual Machine (PVM) developed. Became very popular.
Mid 1990s: Message-Passing Interface (MPI) standard defined.
Both based on the message-passing parallel programming model.
Both provide a set of user-level libraries for message passing.
Used with sequential programming languages (C, C++, ...).
2.3 MPI (Message Passing Interface)
Message-passing library standard developed by a group of academics and industrial partners to foster more widespread use and portability.
Defines routines, not implementation.
Several free implementations exist.
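A minimal sketch of an MPI program in C (illustrative, not from the original slides; assumes an MPI implementation such as MPICH or Open MPI is installed):

    /* Every routine used here (MPI_Init, MPI_Comm_rank,
       MPI_Comm_size, MPI_Finalize) is defined by the MPI
       standard, so any conforming implementation provides it. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* number of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        printf("Hello from process %d of %d\n", rank, size);
        MPI_Finalize();                        /* shut down the MPI environment */
        return 0;
    }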
2.4 Message passing concept using library routines
[Figure: processes exchanging a message through library send/receive routines.]
2.5 Message routing between computers typically done by daemon processes installed on the computers that form the "virtual machine".
[Figure: workstations, each running an application program (executable) and a daemon process; messages sent through the network.]
Can have more than one process running on each computer.
2.6 Message-Passing Programming using User-level Message-Passing Libraries
Two primary mechanisms needed:
1. A method of creating processes for execution on different computers
2. A method of sending and receiving messages
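A sketch of the second mechanism using the standard MPI point-to-point routines (illustrative, not from the original slides; the first mechanism, process creation, happens outside the program at launch time):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, x = 42;
        MPI_Status status;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)         /* process 0 sends one int to process 1 */
            MPI_Send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1) {  /* process 1 receives it from process 0 */
            MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("Process 1 received %d\n", x);
        }
        MPI_Finalize();
        return 0;
    }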
2.7 Creating processes on different computers
2.8 Multiple Program, Multiple Data (MPMD) model
Different programs executed by each processor.
[Figure: a separate source file for each processor, each compiled to suit its processor into its own executable, for processor 0 through processor p - 1.]
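(Illustrative, not from the original slides: the mpiexec launcher described by the MPI standard accepts a colon-separated MPMD form, supported by implementations such as MPICH and Open MPI; ./master and ./worker are hypothetical executables.)

    mpiexec -n 1 ./master : -n 4 ./worker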
2.9 Single Program, Multiple Data (SPMD) model
Basic MPI way. Same program executed by each processor.
Control statements select different parts for each processor to execute.
[Figure: one source file compiled to suit each processor, producing executables for processor 0 through processor p - 1.]
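A typical SPMD build-and-run sequence (illustrative, not from the original slides; the mpicc wrapper and mpiexec launcher are common to MPICH and Open MPI, and prog.c is a hypothetical source file):

    mpicc -o prog prog.c   # compile the single source file once
    mpiexec -n 4 ./prog    # launch 4 instances of the same executable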
2.10 In MPI, processes within a defined communicating group are given a number called a rank, starting from zero onwards.
A program uses control constructs, typically IF statements, to direct processes to perform specific actions. Example:
if (rank == 0) ... /* do this */;
if (rank == 1) ... /* do this */;
...
2.11 Usually the computation is constructed as a master-slave model:
One process (the master) performs one set of actions and all the other processes (the slaves) perform identical actions, although on different data, i.e.:
if (rank == 0) ... /* master does this */
else ... /* all slaves do this */
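A fuller sketch of the master-slave pattern (illustrative, not from the original slides; master() and slave() are hypothetical functions standing in for the two sets of actions):

    #include <mpi.h>

    void master(void) { /* master's set of actions */ }
    void slave(void)  { /* slaves' identical actions, on different data */ }

    int main(int argc, char *argv[]) {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            master();  /* one process performs the master's actions */
        else
            slave();   /* all other processes act as slaves */
        MPI_Finalize();
        return 0;
    }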