Parallel Programming and MPI

© 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

Parallel Programming and MPI
A course for IIT-M, September 2008
R Badrinath, STSD Bangalore (ramamurthy.badrinath@hp.com)
Context and Background

IIT-Madras has recently added a good deal of compute power.

Why:
- Further R&D in sciences and engineering
- Provide computing services to the region
- Create new opportunities in education and skills

Why this course:
- Update skills to program modern cluster computers

Length: 2 theory and 2 practice sessions, 4 hrs each
Audience Check
Contents

1. MPI_Init
2. MPI_Comm_rank
3. MPI_Comm_size
4. MPI_Send
5. MPI_Recv
6. MPI_Bcast
7. MPI_Comm_create
8. MPI_Sendrecv
9. MPI_Scatter
10. MPI_Gather
…

Instead we:
- Understand issues
- Understand concepts
- Learn enough to pick up from the manual
- Go by motivating examples
- Try out some of the examples
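The calls listed above fit together in one standard pattern. A minimal sketch, assuming an MPI implementation is installed (built with mpicc and launched with, e.g., mpirun -np 4, so it will not build as plain C):

```c
/* Minimal sketch using some of the calls listed above.
   Requires an MPI implementation: compile with mpicc,
   launch with mpirun -np <ranks> ./a.out */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char msg[64];
    if (rank != 0) {
        /* every non-zero rank sends one message to rank 0 */
        snprintf(msg, sizeof msg, "hello from rank %d", rank);
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    } else {
        /* rank 0 receives one message from each other rank */
        for (int src = 1; src < size; src++) {
            MPI_Recv(msg, sizeof msg, MPI_CHAR, src, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("%s\n", msg);
        }
    }
    MPI_Finalize();
    return 0;
}
```

Every rank runs the same program; the value returned by MPI_Comm_rank decides which branch each task takes.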
Outline
- Sequential vs Parallel programming
- Shared vs Distributed Memory
- Parallel work breakdown models
- Communication vs Computation
- MPI Examples
- MPI Concepts
- The role of IO
Sequential vs Parallel

- We are used to sequential programming: C, Java, C++, etc. E.g., Bubble Sort, Binary Search, Strassen Multiplication, FFT, BLAST, …
- Main idea: specify the steps in perfect order.
- Reality: we are used to parallelism a lot more than we think, as a concept, though not for programming.
- Methodology: launch a set of tasks and communicate to make progress. E.g., sort 500 answer papers by making 5 equal piles, having them sorted by 5 people, and merging them together.
Shared vs Distributed Memory Programming

- Shared Memory: all tasks access the same memory, hence the same data (e.g., pthreads).
- Distributed Memory: all memory is local; data sharing is done by explicitly transporting data from one task to another (send-receive pairs in MPI, e.g.).
- Hardware / programming model relationship: tasks vs CPUs; SMPs vs clusters.

[Figure: per-task program and memory, connected by a communications channel]
Designing Parallel Programs
Simple Parallel Program – sorting numbers in a large array A

- Notionally divide A into 5 pieces: [0..99; 100..199; 200..299; 300..399; 400..499].
- Each part is sorted by an independent sequential algorithm and left within its region.
- The resultant parts are merged by simply reordering among adjacent parts.
What is different – Think about…

- How many people are doing the work (degree of parallelism)
- What is needed to begin the work (initialization)
- Who does what (work distribution)
- Access to each one's part of the work (data/IO access)
- Whether they need information from each other to finish their own jobs (communication)
- When they are all done (synchronization)
- What needs to be done to collate the result
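Each checklist item maps onto a concrete MPI call. A hedged sketch of that mapping, assuming an MPI implementation (compile with mpicc, launch with, e.g., mpirun -np 5; not runnable as plain C), with each rank's contribution chosen purely for illustration:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                 /* initialization */

    int rank, size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* degree of parallelism */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who does what */

    /* Work distribution / data access: each rank computes on its
       own part; here each rank just contributes its rank number. */
    int mine = rank, total = 0;

    /* Communication + collating the result: sum across all ranks,
       delivered to rank 0. */
    MPI_Reduce(&mine, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("sum of ranks = %d\n", total);

    MPI_Finalize();                         /* all ranks are done */
    return 0;
}
```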
Work Breakdown
This note was uploaded on 03/23/2010 for the course CSE 4018 taught by Professor Angadsing during the Spring '10 term at Punjab Engineering College.
