Carnegie Mellon Parallel Computing Notes on Lecture 6

Data-parallel model, synchronization, forall loops


Non-blocking asynchronous send/recv:
- SEND: call returns immediately. The buffer passed to send cannot be modified by the calling thread, since message processing occurs concurrently with thread execution.
- RECV: call posts intent to receive and returns immediately. Use SENDPROBE / RECVPROBE to determine actual send/receipt status.

Sender:
  Call SEND(foo)
  SEND(foo) returns handle h1
  Copy data from 'foo' into network buffer     (concurrent with application thread)
  Send message                                 (concurrent with application thread)
  Call SENDPROBE(h1)   // if message sent, now safe for thread to modify 'foo'

Receiver:
  Call RECV(bar)
  RECV(bar) returns handle h2
  Receive message                              (concurrent with application thread)
  Messaging library copies data into 'bar'     (concurrent with application thread)
  Call RECVPROBE(h2)   // if received, now safe for thread to access 'bar'

(On the original slide, red text marks the steps that execute concurrently with the application thread.)

CMU 15-418, Spring 2014

Variants of send and receive messages:
- Synchronous
- Asynchronous
  - Blocking async
  - Non-blocking async
The variants of send/recv provide different levels of programming complexity and different opportunities to optimize performance.

Solver implementation in THREE programming models

1. Data-parallel model
   - Synchronization:
     - forall loop iterations are independent (can be parallelized)
     - Implicit barrier at end of outer forall loop body
   - Communication:
     - Implicit in loads...
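The sender/receiver timeline above can be sketched as a toy single-process simulation. This is not a real messaging library: the names (send, recv, probe, Handle) mirror the SEND/RECV/SENDPROBE/RECVPROBE pseudocode from the notes, a queue plays the role of the network, and background threads play the role of the messaging library doing work concurrently with the application thread.

```python
# Toy sketch of non-blocking async SEND/RECV with probes.
# The "network" is a queue; background threads are the messaging library.
import threading
import queue
import time

network = queue.Queue()

class Handle:
    """Returned immediately by send/recv; completion is signaled later."""
    def __init__(self):
        self.done = threading.Event()

def send(buf):
    """Post a send and return immediately with a handle."""
    h = Handle()
    def library_work():
        network.put(bytes(buf))   # copy data from 'buf' into network buffer
        h.done.set()              # message sent: 'buf' is now safe to modify
    threading.Thread(target=library_work, daemon=True).start()
    return h

def recv(buf):
    """Post intent to receive and return immediately with a handle."""
    h = Handle()
    def library_work():
        data = network.get()      # receive message
        buf[:len(data)] = data    # messaging library copies data into 'buf'
        h.done.set()              # received: 'buf' is now safe to access
    threading.Thread(target=library_work, daemon=True).start()
    return h

def probe(h):
    """SENDPROBE/RECVPROBE analogue: non-blocking status check."""
    return h.done.is_set()

foo = bytearray(b"hello")
bar = bytearray(5)

h1 = send(foo)                    # returns immediately
h2 = recv(bar)                    # returns immediately
while not (probe(h1) and probe(h2)):
    time.sleep(0.01)              # application thread could do other work here
print(bytes(bar))                 # safe to access 'bar' only after the probe succeeds
```

Note how the application thread must not touch foo or bar between posting the calls and a successful probe; that window is exactly where the library's concurrent work (the red text on the slide) happens.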
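The data-parallel bullets above can be made concrete with a small sketch, assuming the grid solver these notes refer to is the usual 2D averaging solver. NumPy slicing stands in for the forall: every interior element is computed independently from the old grid (communication is implicit in the loads of neighboring elements), and building the entire new grid before the next sweep plays the role of the implicit barrier at the end of the outer forall body.

```python
# Sketch of a data-parallel solver sweep: forall interior (i, j),
# compute a 5-point average from the previous grid. Assumed shape of
# the solver; not taken verbatim from the lecture.
import numpy as np

def sweep(A):
    # forall body: independent per-element averages over the OLD grid.
    new = A.copy()
    new[1:-1, 1:-1] = 0.2 * (A[1:-1, 1:-1] +
                             A[:-2, 1:-1] + A[2:, 1:-1] +
                             A[1:-1, :-2] + A[1:-1, 2:])
    # Returning the complete new grid acts as the implicit barrier:
    # no element of the next sweep reads a partially updated grid.
    return new

A = np.zeros((6, 6))
A[0, :] = 1.0                 # fixed boundary condition on the top row
for _ in range(50):
    A = sweep(A)              # each sweep = one parallel forall + barrier
```

After enough sweeps, heat from the fixed boundary diffuses into the interior; the boundary rows themselves are never written by the forall.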

This document was uploaded on 03/19/2014 for the course 15-418 at Carnegie Mellon.
