lect06-omp-mpi-titanium

CMSC 714 Lecture 6: MPI vs. OpenMP and Titanium
Alan Sussman (with Jeffrey K. Hollingsworth)

Notes
! MPI project due next Wed., 6 PM
– Questions on debugging MPI programs? Ask Dr. Sussman via email
! OpenMP project posted after the MPI project is turned in

OpenMP + MPI
! Some applications can take advantage of both message passing and threads
– The question is how to obtain the best overall performance without too much programming difficulty
– The choices are all MPI, all OpenMP, or both
• For both, a common option is an outer loop parallelized with message passing and an inner loop with directives to generate threads (see the hybrid sketch after these slides)
! Applications studied:
– Hydrology – CGWAVE
– Computational chemistry – GAMESS
– Linear algebra – matrix multiplication and QR factorization
– Seismic processing – SPECseis95
– Computational fluid dynamics – TLNS3D
– Computational physics – CRETIN

Types of parallelism in the codes
! For message-passing parallelism (MPI):
– Parametric – coarse-grained outer loop, essentially task parallel
– Structured domains – domain decomposition with local operations, on structured and unstructured grids (a halo-exchange sketch follows below)
– Direct solvers – linear algebra; lots of communication and load balancing required; message passing works well for large systems of equations
! For shared-memory parallelism (OpenMP):
– Statically scheduled parallel loops – one large loop, or several smaller, non-nested parallel loops
– Parallel regions – merge loops into one parallel region to reduce the overhead of directives
– Dynamic load balancing – when static scheduling leads to load imbalance from irregular task sizes (both illustrated in the OpenMP sketch below)
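
The "outer loop in MPI, inner loop in OpenMP" option above can be made concrete with a minimal sketch. This is an illustration only, not code from the lecture: the 2-D index space and the arithmetic inside the loops are placeholders for real application work.

```c
/* Hybrid sketch: outer loop split across MPI ranks, inner loop
 * parallelized with OpenMP threads.  The index space and the
 * arithmetic are placeholders for real application work. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv) {
    int provided, rank, size;
    /* FUNNELED: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = 0.0;
    /* Outer loop: coarse-grained, round-robin over MPI ranks. */
    for (int i = rank; i < N; i += size) {
        /* Inner loop: fine-grained, threads within one rank. */
        #pragma omp parallel for reduction(+:local)
        for (int j = 0; j < N; j++)
            local += (double)i * j;          /* placeholder work */
    }

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %g\n", global);
    MPI_Finalize();
    return 0;
}
```

Compiled with something like `mpicc -fopenmp`, the MPI rank count and OMP_NUM_THREADS together choose the split between processes and threads, which is exactly the tuning knob the studied applications vary.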
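For the "structured domains" style of MPI parallelism, the typical local-operation building block is a halo (ghost cell) exchange between neighboring subdomains. Below is a minimal 1-D sketch of that technique, assuming a block decomposition with one ghost cell per side; it is generic, not taken from any of the studied codes.

```c
/* 1-D halo exchange sketch: each rank owns LOCAL cells plus one ghost
 * cell on each side, and swaps boundary values with its neighbors
 * before applying a local stencil operation. */
#include <mpi.h>

#define LOCAL 1000

void exchange_halos(double u[LOCAL + 2], MPI_Comm comm) {
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Ship the rightmost owned cell right; fill the left ghost cell. */
    MPI_Sendrecv(&u[LOCAL], 1, MPI_DOUBLE, right, 0,
                 &u[0],     1, MPI_DOUBLE, left,  0,
                 comm, MPI_STATUS_IGNORE);
    /* Ship the leftmost owned cell left; fill the right ghost cell. */
    MPI_Sendrecv(&u[1],         1, MPI_DOUBLE, left,  1,
                 &u[LOCAL + 1], 1, MPI_DOUBLE, right, 1,
                 comm, MPI_STATUS_IGNORE);
}
```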

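The two OpenMP items, merged parallel regions and dynamic load balancing, can likewise be sketched. Creating the thread team once for several loops reduces directive overhead, and schedule(dynamic) hands out iterations on demand when per-iteration cost is irregular. The loop bodies here are placeholders.

```c
/* One parallel region for several loops: the thread team is created
 * once, rather than once per "#pragma omp parallel for" directive. */
#include <omp.h>

#define N 100000

void sketch(double *a, const double *b, double *c) {
    #pragma omp parallel
    {
        #pragma omp for                        /* statically scheduled */
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * b[i];

        /* Dynamic scheduling: iterations handed out in chunks of 64,
         * useful when per-iteration cost is irregular. */
        #pragma omp for schedule(dynamic, 64)
        for (int i = 0; i < N; i++)
            c[i] = a[i] * a[i];                /* placeholder work */
    }
}
```

The implicit barrier at the end of the first worksharing loop ensures the second loop reads fully updated values of a[].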

CGWAVE
! Finite elements – MPI parameter-space evaluation in the outer loop, OpenMP sparse linear equation solver in the inner loops
! The speedup from using two levels of parallelism makes it possible to model larger bodies of water in a reasonable amount of time
! Master-worker strategy for dynamic load balancing in the MPI part, one task per component (a sketch of the pattern follows this slide)
! The solver for each component solves a large sparse linear system, using OpenMP to parallelize
! On the SGI Origin 2000 (a distributed shared memory machine), the first-touch rule is used to migrate each component's data to the processor that uses it (see the first-touch sketch below)
! Performance results show that the best performance is obtained using both MPI and OpenMP, with a combination of MPI workers and OpenMP threads that depends on the problem/grid size, and, for load balancing, many fewer MPI workers than components
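
A sketch of the master-worker pattern described above: rank 0 hands out component indices on demand, so faster workers automatically take on more components. solve_component() is a hypothetical stand-in for CGWAVE's OpenMP-parallelized sparse solve; the real code's messages and bookkeeping will differ.

```c
/* Master-worker sketch: rank 0 assigns component indices on demand,
 * giving dynamic load balancing across MPI workers.
 * solve_component() is a hypothetical stand-in for the OpenMP-parallel
 * sparse solve of one component. */
#include <mpi.h>

#define NCOMPONENTS 200
#define TAG_WORK 1
#define TAG_DONE 2

extern void solve_component(int c);    /* OpenMP-parallel inside */

void master(int nworkers) {            /* run on rank 0 */
    int next = 0, req, active = nworkers;
    MPI_Status st;
    while (active > 0) {
        /* Any idle worker's request for the next piece of work. */
        MPI_Recv(&req, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &st);
        if (next < NCOMPONENTS) {
            MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                     MPI_COMM_WORLD);
            next++;
        } else {                       /* nothing left: retire worker */
            MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_DONE,
                     MPI_COMM_WORLD);
            active--;
        }
    }
}

void worker(void) {                    /* run on every other rank */
    int c = -1;
    MPI_Status st;
    for (;;) {
        MPI_Send(&c, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD);
        MPI_Recv(&c, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
        if (st.MPI_TAG == TAG_DONE) break;
        solve_component(c);
    }
}
```

In main, rank 0 would call master(size - 1) and every other rank would call worker(); because workers pull work as they finish, far fewer workers than components still keeps everyone busy.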
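The first-touch remark also deserves a sketch. On the Origin 2000 (and on today's NUMA machines), a memory page is physically placed on the node of the processor that first writes it, so initializing data with the same parallel structure that later computes on it leaves each piece in the memory closest to its user. A minimal illustration, with a placeholder array:

```c
/* First-touch sketch: a page is placed on the node of the thread that
 * first writes it.  Initializing in parallel with the same static
 * schedule as the later compute loops keeps each thread's slice of
 * the array in its local memory. */
#include <omp.h>
#include <stdlib.h>

#define N 1000000

double *alloc_with_first_touch(void) {
    double *a = malloc(N * sizeof *a);  /* virtual pages only, so far */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++)
        a[i] = 0.0;                     /* first write fixes placement */
    return a;
}
```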
GAMESS
! Computational chemistry – molecular dynamics –