08-mpi-1: Announcements, Class Schedule, Action Items

Announcements
• Class schedule
• Feedback on your ‘thoughts on term project’ coming soon
• HW #3 is posted on T-Square under Tests+Quizzes

Action Items
• Term project pre-proposals will be due in early October
• Programming HW coming soon

CS6230: HPC Tools and Applications
Distributed Memory Parallelism with MPI

Jeffrey S. Vetter
Computational Science and Engineering
College of Computing
Georgia Institute of Technology
http://ft.ornl.gov/~vetter
vetter@computer.org

Thinking about Parallelism
(Each level of the hardware hierarchy and the programming tools that target it.)
• Core: assembler; SIMD, AVX; the compiler; libraries and frameworks
• Socket (multicore): threads (Pthreads, OpenMP); a distributed memory model such as MPI or GAS languages; libraries and frameworks
• Node: threads (Pthreads, OpenMP); a distributed memory model such as MPI or GAS languages; libraries and frameworks; thread affinity to memory becomes much more important
• System: a distributed memory model such as MPI or GAS languages; libraries and frameworks

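To make the layering concrete, here is a minimal hybrid sketch (not from the slides, and only one way to combine the levels): MPI supplies the distributed memory model across nodes, while OpenMP threads exploit the cores within a socket or node. It assumes an MPI library with thread support and an OpenMP-capable compiler, e.g. mpicc -fopenmp.

/* hybrid.c: MPI across nodes, OpenMP threads within a node (illustrative sketch). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    /* Ask for an MPI library that tolerates OpenMP threads
       (only the main thread makes MPI calls here). */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        /* Each thread reports its place in the two-level hierarchy. */
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
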
Parallel Programming Overview

Programming Models
• Virtually all HPC applications are written in the Message Passing Interface (MPI) programming model
  - Snir, M., W. D. Gropp, et al., Eds. (1998). MPI: The Complete Reference (2-volume set). Scientific and Engineering Computation. Cambridge, Mass.: MIT Press.
  - http://www-unix.mcs.anl.gov/mpi/
  - Google: Message Passing Interface
• Shared memory parallelism
  - OpenMP: www.openmp.org
  - Pthreads: http://www.llnl.gov/computing/tutorials/pthreads/

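For reference (this skeleton is not on the slide), virtually every MPI code in C shares the same minimal shape: initialize the library, discover the process's rank and the total process count, do work, and finalize.

/* skeleton.c: the minimal shape shared by MPI programs. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                  /* start the MPI runtime  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks  */
    printf("hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                          /* shut the runtime down  */
    return 0;
}

Build and run with, e.g., mpicc skeleton.c && mpiexec -n 4 ./a.out.
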
Message Passing Interface (MPI)
Thanks to a lot of sources on the WWW!

Types of Parallel Computing Models
• Data parallel: the same instructions are carried out simultaneously on multiple data items (SIMD)
• Task parallel: different instructions are carried out on different data (MIMD)
• SPMD (single program, multiple data): every process runs the same program, but the processes are not synchronized at the individual operation level
• SPMD is equivalent to MIMD, since each MIMD program can be made SPMD (see the sketch below)

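A minimal sketch of that last point, assuming standard MPI in C: every rank executes the same binary (single program), but branching on the rank value yields effectively different instruction streams on different data (MIMD behavior).

/* spmd.c: one program, many processes; rank-dependent branches mimic MIMD. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* "Different instructions": rank 0 acts as a coordinator... */
        printf("rank 0 coordinating %d workers\n", size - 1);
    } else {
        /* ...while every other rank works on its own piece of the data. */
        printf("rank %d processing chunk %d\n", rank, rank - 1);
    }

    MPI_Finalize();
    return 0;
}
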
The Message Passing Model
• A process is (traditionally) a program counter and an address space
• Processes may have multiple threads (program counters and associated stacks) sharing a single address space
• MPI is for communication among processes, which have separate address spaces
• Interprocess communication consists of:
  - synchronization
  - movement of data from one process’s address space to another’s (see the sketch below)

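A minimal sketch of both ingredients, assuming standard MPI in C and at least two ranks: MPI_Send copies data out of rank 0's address space, and the blocking MPI_Recv both delivers it into rank 1's separate address space and synchronizes the receiver with the sender.

/* sendrecv.c: data movement plus synchronization between address spaces. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* count = 1 int, destination rank 1, message tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocks until the message arrives: movement and synchronization. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
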
Message Passing Interface
• Distributed memory programming model
• Assumes that individual tasks do not share memory
• Applications must explicitly transfer data (a sketch follows)
• MPI is a specification, not a particular implementation

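Because tasks cannot read each other's memory, even a global sum has to be assembled through explicit communication. A minimal sketch, assuming standard MPI in C, using the collective MPI_Reduce to combine each task's private partial result on rank 0:

/* reduce.c: explicit data transfer; no shared memory between tasks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = rank + 1;   /* each task's private partial result */
    int total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);

    MPI_Finalize();
    return 0;
}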