11f6643lec03 - Parallel Programming
Parallel Programming Paradigms
- Explicit vs. implicit parallel programming
- Data vs. control parallelism
- Shared memory programming: threads
- Message passing programming: Message Passing Interface (MPI)

Explicit & Implicit Parallel Programming
- Explicit parallel programming
  - The parallel part is spelled out by the programmer
  - Hard for the programmer
- Implicit parallel programming
  - Parallelism is extracted by the compiler
  - Easier for the programmer, hard for the compiler writer
  - Parallelism is not fully exploited
- In practice, a combination of the two is used

Control Parallelism
- Simultaneous execution of different instruction streams, e.g., pipelining
- MIMD machines
- Suitable for process pipelining and functional parallelism

Data Parallelism
- Specify a sequence of operations on many data elements
- Synchronize processes after each such sequence
- SIMD if the sequence size is 1: lock-step execution of instructions
- SPMD (single program, multiple data) if the sequence size is greater than 1

Data Parallel Programming
- Parallel operations are specified using special constructs
- Data parallel languages provide such constructs, e.g., HPF
- Data distribution needs to be specified explicitly for efficiency
- Example:

    !HPF$ PROCESSORS P(10)
          real x(100), y(100), z(100)
    !HPF$ ALIGN y(:) WITH x(:)
    !HPF$ ALIGN z(:) WITH x(:)
    !HPF$ DISTRIBUTE x(BLOCK) ONTO P
          z = x + y

SPMD Programming
- Based on data parallel programming
- All processes execute the same program
- Variations in the instructions executed are achieved using if statements, for example:

    if (myid == 0) { ... } else { ... }

- Processes synchronize at specific places; multiple groups of processes may synchronize
- (See the first sketch at the end of these notes for a complete SPMD program.)

Message Passing Programming
- Processes communicate by explicit messages
- Send-message and receive-message primitives
- More complex primitives are built on send and receive: broadcast, scatter, gather, barrier, ...
- Blocking vs. non-blocking sends and receives

Send and Receive Operations
- Blocking sends & receives
  - The process waits until its send or receive is complete, or until it can be completed correctly regardless of any further work the process does
  - Easier to program
  - Inefficient if communication latency is high or if sends/receives are not matched appropriately
- Non-blocking sends & receives
  - The process does not wait for sends and receives to complete
  - For a send, the data being sent should not be modified until the send has completed
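To make the SPMD pattern concrete, here is a minimal sketch of an SPMD program written with MPI, which the notes introduce above. The variable name myid follows the slide's example; the printed messages and the barrier placement are illustrative assumptions, not part of the original notes. Every process runs this same program and branches on its rank; run it with two or more processes, e.g., mpirun -np 4 ./a.out.

    /* Minimal SPMD sketch: one program, behavior varies by process id. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int myid, nprocs;

        MPI_Init(&argc, &argv);                 /* every process executes this same program */
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);   /* but each gets a distinct id */
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        if (myid == 0) {
            printf("Process 0 of %d: doing the coordinator's work\n", nprocs);
        } else {
            printf("Process %d of %d: doing a worker's share\n", myid, nprocs);
        }

        MPI_Barrier(MPI_COMM_WORLD);            /* sync. among processes at a specific place */
        MPI_Finalize();
        return 0;
    }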
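The blocking send and receive primitives described above look roughly as follows in MPI. This is a hedged sketch: the message tag (0), the buffer size (100, echoing the arrays in the HPF example), and the choice of ranks 0 and 1 are arbitrary assumptions. It needs at least two processes (e.g., mpirun -np 2).

    /* Blocking point-to-point communication: MPI_Send / MPI_Recv. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int myid;
        double buf[100];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);

        if (myid == 0) {
            for (int i = 0; i < 100; i++) buf[i] = (double)i;
            /* blocking send: returns once buf may safely be reused */
            MPI_Send(buf, 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (myid == 1) {
            /* blocking receive: waits until the message has arrived */
            MPI_Recv(buf, 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
            printf("Process 1 received buf[99] = %g\n", buf[99]);
        }

        MPI_Finalize();
        return 0;
    }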
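For the non-blocking case, a sketch using MPI_Isend/MPI_Irecv illustrates the caveat in the last bullet: the send buffer must not be modified (and the receive buffer must not be read) until MPI_Wait reports completion. Buffer size, tag, and ranks are again arbitrary assumptions.

    /* Non-blocking communication: the calls return immediately, and the
       process can do other work before completing them with MPI_Wait. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int myid;
        double buf[100];
        MPI_Request req;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);

        if (myid == 0) {
            for (int i = 0; i < 100; i++) buf[i] = (double)i;
            MPI_Isend(buf, 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
            /* ... overlap useful computation here, but do not modify buf ... */
            MPI_Wait(&req, &status);  /* after this, buf may be modified again */
        } else if (myid == 1) {
            MPI_Irecv(buf, 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
            /* ... overlap work that does not read buf ... */
            MPI_Wait(&req, &status);  /* buf is valid only after this returns */
            printf("Process 1 received buf[0] = %g\n", buf[0]);
        }

        MPI_Finalize();
        return 0;
    }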