College of Information Technology
Master Program in Scientific Computing
Scientific Computing II (SCOM6301)

Introduction to Parallel Computing
Lecture 5: Implementation Styles and Examples (CH03)

Decomposing Programs for Parallelism

Implementation styles:
  • Iterative loops
  • Recursive traversal of tree-like data structures

Iterative loops
  • Parallel loop programming: on shared-memory systems; be aware of data races (dependences)
  • SPMD programming (single program, multiple data)
  • Recursive task programming: not considered at this stage

Parallel loop programming: To parallelize loops, we assign different iterations, or different blocks of iterations, to different processors. On shared-memory systems, this decomposition is usually coded as some kind of PARALLEL DO loop.

Example

    DO I = 1, N
      A(I) = A(I) + C
    ENDDO

Is there data sharing in this code?

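No: each iteration writes a distinct element A(I), so the iterations are independent and the loop can run in parallel as-is. As a concrete illustration (not from the slides), here is a minimal sketch assuming OpenMP's Fortran directives as the realization of the generic PARALLEL DO:

    ! Minimal OpenMP sketch (assumed realization) of the race-free loop.
    program add_constant
      implicit none
      integer, parameter :: n = 1000000
      real :: a(n), c
      integer :: i
      a = 1.0
      c = 2.0
      !$omp parallel do          ! each thread gets a block of iterations
      do i = 1, n
         a(i) = a(i) + c         ! distinct element per iteration: no race
      end do
      !$omp end parallel do
      print *, a(1), a(n)
    end program add_constant
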
Example 2

    DO I = 1, N
      A(I) = A(I+1) + C
    ENDDO

Is there data sharing in this code? Yes: iteration I reads A(I+1), the element that iteration I+1 overwrites, so running the iterations concurrently creates a write-after-read (anti-dependence) race.

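One standard way to break such a write-after-read race (a technique beyond what the slide shows) is to read from an unmodified snapshot of the array; the copy A_OLD below is a hypothetical helper introduced only for illustration:

    ! Hedged sketch: eliminate the anti-dependence via a snapshot copy.
    program break_war_race
      implicit none
      integer, parameter :: n = 100
      real :: a(n+1), a_old(n+1), c
      integer :: i
      a = 1.0
      c = 2.0
      a_old = a                  ! snapshot taken before any writes
      !$omp parallel do
      do i = 1, n
         a(i) = a_old(i+1) + c   ! reads hit the snapshot, so no race
      end do
      !$omp end parallel do
      print *, a(1), a(n)
    end program break_war_race
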
Our focus in loop parallelism is the discovery of loops that have no races.

Example

    SUM = 0.0
    DO I = 1, N
      R = F(B(I),C(I)) ! an expensive computation
      SUM = SUM + R
    ENDDO

Are there races here? Yes: the variable SUM is written and read on every iteration. Assuming that the computation of function F is expensive, how can we gain from parallelism in this case?

We can gain some speed if we compute the values of F in parallel and then update SUM in the order in which those computations finish. To make this work, we must ensure that only one processor updates SUM at a time, and that each update finishes before the next is allowed to begin.

On shared-memory systems, this is done with critical regions: code segments that can be executed by only one processor at a time. A possible realization of the parallel version:

    SUM = 0.0
    PARALLEL DO I = 1, N
      R = F(B(I),C(I)) ! an expensive computation
      BEGIN CRITICAL REGION
        SUM = SUM + R
      END CRITICAL REGION
    ENDDO

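A concrete OpenMP Fortran rendering of this sketch (OpenMP is assumed as the realization; F, B, and C are stand-ins, and the accumulator is renamed TOTAL because SUM is a Fortran intrinsic):

    program sum_critical
      implicit none
      integer, parameter :: n = 100000
      real :: b(n), c(n), total, r
      integer :: i
      b = 1.0
      c = 2.0
      total = 0.0
      !$omp parallel do private(r)
      do i = 1, n
         r = f(b(i), c(i))       ! the expensive work runs in parallel
         !$omp critical
         total = total + r       ! one thread at a time updates the sum
         !$omp end critical
      end do
      !$omp end parallel do
      print *, total
    contains
      real function f(x, y)      ! stand-in for the expensive function F
        real, intent(in) :: x, y
        f = x * y
      end function f
    end program sum_critical

In practice, OpenMP's reduction(+:total) clause expresses the same pattern without serializing every update inside a critical region.
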
SPMD Programming

To perform the sum reduction above on a distributed-memory, message-passing system, we will need to rewrite the program to use explicit message passing. In an SPMD program, all of the processors execute the same code, but apply it to different portions of the data.

Considerations for SPMD

Scalar variables are typically replicated on all of the processors and redundantly computed (to identical values) on each processor. In addition, the programmer must insert explicit communication primitives to pass shared data between processors.

The previous example as an SPMD program

    ! This code is executed by all processors
    ! MYSUM, MYFIRST, MYLAST, R, and I are private local variables
    ! MYFIRST and MYLAST are computed separately on each processor

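The slide breaks off before the code body. A hedged reconstruction of what such an SPMD version typically looks like follows, assuming MPI as the message-passing layer; the block-partitioning arithmetic and the stand-in function F are illustrative, not taken from the slides:

    program spmd_sum
      use mpi
      implicit none
      integer, parameter :: n = 100000
      real :: b(n), c(n), mysum, total, r
      integer :: i, myfirst, mylast, rank, nprocs, chunk, ierr
      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
      b = 1.0
      c = 2.0
      ! MYFIRST and MYLAST are computed redundantly on every processor,
      ! each arriving at its own block of iterations.
      chunk   = (n + nprocs - 1) / nprocs
      myfirst = rank * chunk + 1
      mylast  = min(n, myfirst + chunk - 1)
      ! Local partial sum: no communication and no races.
      mysum = 0.0
      do i = myfirst, mylast
         r = f(b(i), c(i))
         mysum = mysum + r
      end do
      ! Explicit communication combines the partial sums.
      call MPI_Reduce(mysum, total, 1, MPI_REAL, MPI_SUM, 0, MPI_COMM_WORLD, ierr)
      if (rank == 0) print *, 'SUM =', total
      call MPI_Finalize(ierr)
    contains
      real function f(x, y)      ! stand-in for the expensive function F
        real, intent(in) :: x, y
        f = x * y
      end function f
    end program spmd_sum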