Parallel Numerical Algorithms
Chapter 3: Parallel Programming

Prof. Michael T. Heath
Department of Computer Science
University of Illinois at Urbana-Champaign
CSE 512 / CS 554

Outline

1 Parallel Programming Paradigms
2 MPI Message-Passing Interface
  - MPI Basics
  - Communication and Communicators
  - Virtual Process Topologies
  - Performance Monitoring and Visualization
3 OpenMP Portable Shared Memory Programming

Parallel Programming Paradigms

- Functional languages
- Parallelizing compilers
- Object parallel
- Data parallel
- Shared memory
- Remote memory access
- Message passing

Functional Languages

- Express what to compute (i.e., mathematical relationships to be satisfied), but not how to compute it or the order in which computations are to be performed
- Avoid artificial serialization imposed by imperative programming languages
- Avoid storage references, side effects, and aliasing that make parallelization difficult
- Permit full exploitation of any parallelism inherent in computation

Functional Languages, continued

- Often implemented using dataflow, in which operations fire whenever their inputs are available, and results then become available as inputs for other operations
- Tend to require substantial extra overhead in work and storage, so have proven difficult to implement efficiently
- Have not been used widely in practice, though numerous experimental functional languages and dataflow systems have been developed

Parallelizing Compilers

- Automatically parallelize programs written in conventional sequential programming languages
- Difficult to do for arbitrary serial code
- Compiler can analyze serial loops for potential parallel execution, based on careful dependence analysis of variables occurring in loop
- User may provide hints (directives) to help compiler determine when loops can be parallelized and how
- OpenMP is standard for compiler directives

Parallelizing Compilers, continued

- Automatic or semi-automatic, loop-based approach has...