MPIAll - Introduction to MPI (Monday, January 30, 12)

Introduction to MPI
Topics to be covered
• MPI vs. shared memory
• Initializing MPI
• MPI concepts -- communicators, processes, ranks
• MPI functions to manipulate these
• Timing functions
• Barriers and the reduction collective operation
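The pieces listed above fit together in a few lines of C. This is a minimal sketch, assuming an MPI implementation such as Open MPI or MPICH is installed (compile with mpicc, run with mpirun -np 4 ./a.out); all calls shown are standard MPI:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);                 /* initialize the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank within the communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    MPI_Barrier(MPI_COMM_WORLD);            /* all processes synchronize here */
    double t0 = MPI_Wtime();                /* wall-clock timer */

    int local = rank + 1, sum = 0;
    /* reduction collective: sum each process's value onto rank 0 */
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("sum = %d over %d processes in %.6f s\n", sum, size, t1 - t0);

    MPI_Finalize();
    return 0;
}
```

With 4 processes the reduction sums 1+2+3+4 on rank 0; MPI_COMM_WORLD is the communicator containing every process in the job.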
Shared and distributed memory

Shared memory
• A consistent image of memory is automatically maintained according to some memory model
• Fine-grained communication is possible via loads, stores, and cache coherence
• Well aligned with the multicore hardware model and its support
• Programs can be converted piece-wise

Distributed memory
• The program executes as a collection of processes; all communication between processors is explicitly specified by the programmer
• Fine-grained communication is in general too expensive -- the programmer must aggregate communication
• Conversion of programs is all-or-nothing
• Cost scaling of machines is better than with shared memory -- well aligned with the economics of commodity rack-mounted blades
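The "communication explicitly specified by the programmer" point is the key contrast with shared-memory loads and stores: every transfer names a source, a destination, and a buffer. A minimal sketch using the standard MPI point-to-point calls (run with at least 2 processes):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int data = 42;
        /* the sender explicitly names the destination rank (1),
           a message tag (0), and the communicator */
        MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int data;
        /* the receiver explicitly names the source rank (0) and tag (0) */
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", data);
    }

    MPI_Finalize();
    return 0;
}
```

In a shared-memory program this would be a single store and load mediated by cache coherence; here the programmer writes (and pays for) each message, which is why fine-grained communication must be aggregated.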
Message Passing
[Figure: eight processor-memory pairs connected by a network -- Ethernet or proprietary (vendor-specific, e.g. InfiniBand)]
Message Passing Model
[Figure: eight processor-memory pairs connected by a network -- Ethernet or proprietary (vendor-specific, e.g. InfiniBand)]
• This drawing implies that all processors are equidistant from one another
• This is often not the case -- the network topology and multicore nodes make some processors closer than others
• Programmers have to exploit this manually
Message Passing Model
• In reality, processes run on cores, and are closer to other processes on the same processor
• Across processors, some can be reached via a single hop on the network; others require multiple hops
• Not a big issue on small machines (several hundred processors), but it needs to be considered on large machines
[Figure: hierarchical topology -- processor/memory (P/M) pairs grouped under local networks, joined by a higher-level network]
[Figure: IBM Blue Gene/L -- 131,072 cores]
Cray Jaguar (Opteron cores)
• 224,256 processing cores, 18,688 nodes, 300 TB of memory
• 2.3 petaflop/s (2.3 quadrillion floating-point operations per second)
• 6,950 kW of power
Why use message passing?
• Allows control over data layout, locality, and communication -- very important on large machines
• Portable across all machines, including shared-memory machines -- it's a universal parallel programming model
• Easier to write deterministic programs
  • simplifies debugging
  • easier to understand programs
• The style needed for efficient messages can lead to better performance than shared-memory programs, even on shared-memory systems

This note was uploaded on 02/19/2012 for the course ECE 563 taught by Professor Staff during the Spring '08 term at Purdue University-West Lafayette.
