Parallel Numerical Algorithms
Chapter 3 – Parallel Programming

Prof. Michael T. Heath
Department of Computer Science
University of Illinois at Urbana-Champaign
CSE 512 / CS 554

Topics: Parallel Programming Paradigms; MPI — Message-Passing Interface; OpenMP — Portable Shared Memory Programming

Michael T. Heath, Parallel Numerical Algorithms, 1 / 64

Outline

1. Parallel Programming Paradigms
2. MPI — Message-Passing Interface
   - MPI Basics
   - Communication and Communicators
   - Virtual Process Topologies
   - Performance Monitoring and Visualization
3. OpenMP — Portable Shared Memory Programming

Parallel Programming Paradigms

- Functional languages
- Parallelizing compilers
- Object parallel
- Data parallel
- Shared memory
- Remote memory access
- Message passing

Functional Languages

- Express what to compute (i.e., mathematical relationships to be satisfied), but not how to compute it or the order in which computations are to be performed
- Avoid artificial serialization imposed by imperative programming languages
- Avoid storage references, side effects, and aliasing that make parallelization difficult
- Permit full exploitation of any parallelism inherent in computation

Functional Languages

- Often implemented using dataflow, in which operations fire whenever their inputs are available, and results then become available as inputs for other operations
- Tend to require substantial extra overhead in work and storage, so have proven difficult to implement efficiently
- Have not been used widely in practice, though numerous experimental functional languages and dataflow systems have been developed

Parallelizing Compilers

- Automatically parallelize programs written in conventional sequential programming languages
- Difficult to do for arbitrary serial code
- Compiler can analyze serial loops for potential parallel execution, based on careful dependence analysis of variables occurring in loop
- User may provide hints (directives) to help compiler determine when loops can be parallelized and how
- OpenMP is standard for compiler directives

Parallelizing Compilers

Automatic or semi-automatic, loop-based approach has