Overview of the Global Arrays Parallel Software Development Toolkit
Bruce Palmer, Manoj Kumar Krishnan, Sriram Krishnamoorthy, Abhinav Vishnu, Daniel Chavarria, Patrick Nichols, Jeff Daily
Distributed Data vs Shared Memory

Distributed Data: data is explicitly associated with each processor, so accessing data requires specifying both the processor and the location of the data on that processor, e.g. (0xf5670, P0) or (0xf32674, P5). Data locality is explicit, but data access is complicated. Distributed computing is typically implemented with message passing (e.g. MPI); a minimal sketch follows below.
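As a concrete illustration of the two-sided cooperation described above, here is a minimal MPI sketch (the rank numbers and buffer names are illustrative, not from the slides): P1 must call a send and P0 must call a matching receive before any data moves.

```c
/* Minimal two-sided MPI exchange: both processors must participate. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    double value = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        value = 3.14;
        /* P1 explicitly sends ... */
        MPI_Send(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        /* ... and P0 must post a matching receive; neither call alone moves data. */
        MPI_Recv(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("P0 received %f from P1\n", value);
    }

    MPI_Finalize();
    return 0;
}
```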
Distributed Data vs Shared Memory (cont.)

Shared Memory: data is in a globally accessible address space, and any processor can access data by specifying its location with a global index, e.g. (47,95) or (106,171) in an array indexed from (1,1) to (150,200). Data is mapped out in a natural manner (usually corresponding to the original problem) and access is easy, but information on data locality is obscured, which leads to loss of performance.
Global Arrays

Distributed dense arrays that can be accessed through a shared-memory-like style: a single, shared data structure with global indexing in a global address space, e.g. access A(4,3) rather than buf(7) on task 2, even though the data is physically distributed. A creation sketch follows below.
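A minimal sketch of creating such a globally indexed array with the GA C interface is below; the array name, dimensions, and MA heap sizes are illustrative assumptions, and initialization details vary between GA installations.

```c
/* Sketch: create a 150x200 double array that every process addresses globally. */
#include <mpi.h>
#include "ga.h"        /* Global Arrays C interface */
#include "macdecls.h"  /* MA memory allocator used by GA */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    GA_Initialize();
    MA_init(C_DBL, 1000000, 1000000);   /* illustrative stack/heap sizes */

    int dims[2]  = {150, 200};          /* one logically shared 150x200 array */
    int chunk[2] = {-1, -1};            /* let GA choose the data distribution */
    int g_a = NGA_Create(C_DBL, 2, dims, "A", chunk);

    /* Every process now refers to elements by global indices (like A(4,3) in the
       slide's notation); no process needs to know which task owns that block. */

    GA_Destroy(g_a);
    GA_Terminate();
    MPI_Finalize();
    return 0;
}
```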
Global Array Model of Computation

Shared view for distributed multi-dimensional arrays, with a Get-Local -> Compute -> Put-Global model of computation: get copies a patch of the shared object into local memory, the process computes/updates it there, and put copies the result back to the shared object. Within-node optimization (SIMD, data locality, tuned library use) is decoupled from across-node optimization (like MPI). Global view => productivity; local compute => performance transparency (no hidden surprises). A sketch of the cycle follows below.
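A hedged sketch of that cycle with the GA C interface follows; the patch bounds and the scaling operation are illustrative, and g_a is assumed to be a 2-D double array created as in the previous sketch.

```c
/* Sketch of Get-Local -> Compute -> Put-Global on a 10x10 patch of g_a. */
#include "ga.h"

void scale_patch(int g_a) {
    int lo[2] = {0, 0};      /* global indices of the patch (0-based in the C API) */
    int hi[2] = {9, 9};
    int ld[1] = {10};        /* leading dimension of the local buffer */
    double buf[10][10];

    /* Get: copy the patch from the shared object into local memory. */
    NGA_Get(g_a, lo, hi, buf, ld);

    /* Compute/update entirely in local memory (within-node optimization). */
    for (int i = 0; i < 10; i++)
        for (int j = 0; j < 10; j++)
            buf[i][j] *= 2.0;

    /* Put: copy the updated patch back to the shared object. */
    NGA_Put(g_a, lo, hi, buf, ld);
    GA_Sync();               /* make the update visible to all processes */
}
```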
One-sided Communication

Message Passing (e.g. MPI): a message requires cooperation on both sides. The processor sending the message (P1) and the processor receiving the message (P0) must both participate, pairing a send with a receive.

One-sided Communication (e.g. SHMEM, ARMCI, MPI-2 one-sided): once the message is initiated on the sending processor (P1), the sending processor can continue computation. The receiving processor (P0) is not involved; data is copied directly from the switch into memory on P0.
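The contrast can be sketched with MPI-2 one-sided operations (one of the mechanisms named above); the window layout and values are illustrative assumptions.

```c
/* Sketch: P1 puts a value directly into memory exposed by P0.
   P0 posts no receive; it only participates in the collective fences. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    double target = 0.0;               /* memory P0 exposes for remote access */
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_create(&target, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);             /* open an access epoch */
    if (rank == 1) {
        double value = 3.14;
        MPI_Put(&value, 1, MPI_DOUBLE, 0 /* target rank */,
                0 /* displacement */, 1, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);             /* close the epoch: the put has completed */

    if (rank == 0)
        printf("P0 sees %f without posting a receive\n", target);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

With libraries such as ARMCI or SHMEM the target is even less involved, since no collective fences are needed around each transfer; the fence-based epoch here is simply the easiest MPI-2 synchronization mode to show.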
Global Arrays Approach to Parallel Programming

Uses a one-sided communication model. One-sided communication supports the creation of a Partitioned Global Address Space (PGAS) programming model, which allows developers to access data using a global index instead of supplying index transformations between the original problem space and the location of individual blocks of data described in (processor, local index) space. This simplifies programming enormously in many cases; a sketch of the transformation that GA avoids follows below.
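For contrast, here is an illustrative (hypothetical) version of the index transformation a developer would otherwise write by hand for a simple 1-D block distribution; GA's global indexing makes this bookkeeping unnecessary.

```c
/* Hypothetical helper: map a global index to (processor, local index)
   for n elements laid out in contiguous blocks over nprocs processors. */
#include <stdio.h>

typedef struct {
    int proc;    /* which processor owns the element */
    int local;   /* index within that processor's block */
} Location;

Location global_to_local(int global_index, int n, int nprocs) {
    int block = (n + nprocs - 1) / nprocs;   /* block size, rounded up */
    Location loc;
    loc.proc  = global_index / block;
    loc.local = global_index % block;
    return loc;
}

int main(void) {
    /* e.g. element 4723 of a 10000-element array over 8 processors */
    Location loc = global_to_local(4723, 10000, 8);
    printf("global 4723 -> (P%d, local %d)\n", loc.proc, loc.local);
    return 0;
}
```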
One-sided vs Message Passing

Message-passing: communication patterns are regular or at least predictable.