Lec10-MPI-1 - Concurrency and Parallelism

Parallel Programming and MPI - Lecture 1
Abhik Roychoudhury
CS 3211, National University of Singapore

Sample material: Parallel Programming by Lin and Snyder, Chapter 7. Made available via the IVLE reading list, accessible from the Lesson Plan.

Concurrency and Parallelism

[Diagram: threads A, B and C interleaved over time on shared processors (concurrency) versus threads A, B and C running simultaneously on separate processors (parallelism).]

Why parallel programming?
- Performance, performance, performance!
- The increasing advent of multi-core machines: homogeneous multi-processing architectures, discussed further in a later lecture.
- Parallelizing compilers never worked: automatically extracting parallelism from an application is very hard.
- It is better for the programmer to indicate which parts of the program to execute in parallel, and how.

How to program for parallel machines?
- Use a parallelizing compiler: the programmer does nothing, but this is too ambitious.
- Extend a sequential programming language:
  - Libraries for creation, termination, synchronization and communication between parallel processes.
  - The base language and its compiler can be reused.
  - The Message Passing Interface (MPI) is one example.
- Design a parallel programming language:
  - Develop a new language (e.g. Occam), or add parallel constructs to a base language (e.g. High Performance Fortran).
  - Must overcome programmer resistance and requires new compilers.

Parallel Programming Models
- Message Passing
  - MPI: Message Passing Interface
  - PVM: Parallel Virtual Machine
  - HPF: High Performance Fortran
- Shared Memory
  - Automatic parallelization
  - POSIX Threads (Pthreads)
  - OpenMP: compiler directives

The Message-Passing Model
- A process is (traditionally) a program counter and an address space.
- Processes may have multiple threads (program counters and associated stacks) sharing a single address space.
- MPI is for communication among processes, which have separate address spaces.
- Interprocess communication consists of synchronization and the movement of data from one process's address space to another's.
- A minimal MPI program illustrating this model is sketched below.
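The following sketch shows what this model looks like in C. It is not part of the original slides; it is a minimal illustration assuming the standard MPI C bindings. Every process launched by the MPI runtime executes the same program in its own address space and learns its identity from its rank.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime           */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id, 0 .. size-1  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes       */

    /* Each process executes this line independently, in its own address space. */
    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime       */
    return 0;
}

Compiled with an MPI wrapper compiler (typically mpicc) and launched with, for example, mpirun -np 4 ./hello, this prints one line per process; nothing is shared between the processes unless it is explicitly communicated.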

The programming model in MPI
- Communicating Sequential Processes.
- Each process runs in its local address space.
- Processes exchange data and synchronize by message passing.
- Often, but not always, the same code is executed by all processes.

Cooperative Operations for Communication
- The message-passing approach makes the exchange of data cooperative: data is explicitly sent by one process and received by another.
- Advantage: any change in the receiving process's memory is made with the receiver's active participation.
- Communication and synchronization are combined.
- Process 0 executes send(data); Process 1 executes receive(data). A sketch of this exchange follows below.

Shared Memory communication in Java
- Shared heap.
- A Java program is compiled into bytecodes; the bytecodes are interpreted by the Java Virtual Machine.
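A sketch of the cooperative send/receive exchange from the slide, again using the standard MPI C bindings (not from the original slides). Process 0's data only appears in process 1's memory because process 1 explicitly calls the receive operation.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, data;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        data = 42;  /* exists only in process 0's address space */
        MPI_Send(&data, 1, MPI_INT, /* dest */ 1, /* tag */ 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&data, 1, MPI_INT, /* source */ 0, /* tag */ 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* The receiver's memory changed only through its own MPI_Recv call. */
        printf("Process 1 received %d\n", data);
    }

    MPI_Finalize();
    return 0;
}

Run with at least two processes, e.g. mpirun -np 2 ./sendrecv. In this blocking form the receive also acts as a synchronization point: process 1 does not proceed past MPI_Recv until a matching message from process 0 has arrived, which is what the slide means by communication and synchronization being combined.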


