# Lecture 5: Implementation Styles and Examples (CH03)

College of Information Technology, Master Program in Scientific Computing
Scientific Computing II (SCOM6301): Introduction to Parallel Computing

## Decomposing Programs for Parallelism

Implementation styles:

- Iterative loops
- Recursive traversal of tree-like data structures

For iterative loops:

- Parallel loop programming: used on shared-memory systems; be aware of data races (dependences)
- SPMD (single-program, multiple-data) programming

Recursive task programming is not considered at this stage.

## Parallel Loop Programming

To parallelize a loop, we assign different iterations, or different blocks of iterations, to different processors. On shared-memory systems, this decomposition is usually coded as some kind of PARALLEL DO loop.

Example:

```fortran
DO I = 1, N
   A(I) = A(I) + C
ENDDO
```

Is there data sharing in this code? No: each iteration reads and writes only its own element A(I), so the iterations are independent.
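Because each iteration touches only its own element, the loop can be split into disjoint blocks with no synchronization. A minimal Python sketch of this block decomposition (the values of N, C, and the worker count are illustrative, not from the slides):

```python
from concurrent.futures import ThreadPoolExecutor

N, C = 16, 5.0
A = [float(i) for i in range(N)]   # A(1..N), zero-based here

def update_block(first, last):
    # Each worker owns a disjoint block of iterations, so no element
    # of A is ever written by two workers: there are no data races.
    for i in range(first, last):
        A[i] = A[i] + C

# Split the iteration space into one block per worker.
workers = 4
bounds = [(w * N // workers, (w + 1) * N // workers) for w in range(workers)]
with ThreadPoolExecutor(max_workers=workers) as pool:
    for first, last in bounds:
        pool.submit(update_block, first, last)

print(A == [float(i) + C for i in range(N)])   # prints True: matches the serial loop
```

The blocks partition the index range exactly, so the parallel result is identical to the serial one.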

## Example 2

```fortran
DO I = 1, N
   A(I) = A(I+1) + C
ENDDO
```

Is there data sharing in this code? Yes: iteration I reads A(I+1) before iteration I+1 overwrites it, so there is a write-after-read (anti) dependence between consecutive iterations.
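The serial loop is correct only because each read of A(I+1) happens before the later iteration's write. A hedged Python sketch (array contents are illustrative) shows the standard way such an anti-dependence is broken before parallelizing: read from a private snapshot of the array, after which the iterations may run in any order.

```python
N, C = 8, 1.0
A = [float(i * i) for i in range(N + 1)]   # A(1..N+1), zero-based here

# Serial semantics: iteration i reads A[i+1] *before* iteration i+1
# overwrites it, because the loop runs in order.
serial = list(A)
for i in range(N):
    serial[i] = serial[i + 1] + C

# To run the iterations in any order (or in parallel), break the
# write-after-read dependence by reading from an unmodified snapshot.
old = list(A)                  # private, read-only copy
result = list(A)
for i in reversed(range(N)):   # any order is now safe
    result[i] = old[i + 1] + C

print(result == serial)        # prints True
```

The extra copy trades memory for independence, a common pattern when removing anti-dependences.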
## Our Focus in Loop Parallelism

Our focus is the discovery of loops that have no races.

Example:

```fortran
SUM = 0.0
DO I = 1, N
   R = F(B(I),C(I)) ! an expensive computation
   SUM = SUM + R
ENDDO
```

Are there races here? Yes: the variable SUM is written and read on every iteration. Assuming that the computation of function F is expensive, how can we gain from parallelism in this case?

We can gain some speed if we compute the values of F in parallel and then update SUM in the order in which those computations finish. To make this work, we must ensure that only one processor updates SUM at a time and each finishes before the next is allowed to begin.
## Critical Regions

Code segments that can be executed by only one processor at a time are called critical regions; they are used on shared-memory systems. A possible realization of the parallel version:

```fortran
SUM = 0.0
PARALLEL DO I = 1, N
   R = F(B(I),C(I)) ! an expensive computation
   BEGIN CRITICAL REGION
   SUM = SUM + R
   END CRITICAL REGION
ENDDO
```

## SPMD Programming

To perform the sum reduction above on a distributed-memory message-passing system, we need to rewrite the program to use explicit message passing. In an SPMD program, all of the processors execute the same code, but apply the code to different portions of the data.
## Considerations for SPMD

Scalar variables are typically replicated on all of the processors and redundantly computed (to identical values) on each processor. In addition, the programmer must insert explicit communication primitives in order to pass the shared data between processors.

## The Previous Example as an SPMD Program

```fortran
! This code is executed by all processors
! MYSUM, MYFIRST, MYLAST, R, and I are private local variables
! MYFIRST and MYLAST are computed separately on each processor
```

(The remainder of the SPMD code is not included in this excerpt.)