Carnegie Mellon Parallel Computing Notes on Lecture 4




[…] of communication due to implementation details (e.g., caching)

CMU 15-418, Spring 2014

Message passing model
▪ Grid data stored in four separate address spaces (four private arrays): Thread 1 Address Space, Thread 2 Address Space, Thread 3 Address Space, Thread 4 Address Space

Replication required to perform computation
▪ Replication is required for correctness
▪ Example: threads 1 and 3 send a row to thread 2 (otherwise thread 2 cannot update its local cells)
▪ "Ghost cells": grid cells replicated from a remote address space
▪ Thread 2 logic:

  cell_t ghost_row_top[N+2]; // ghost row storage
  cell_t ghost_row_bot[N+2]; // ghost row storage
  int bytes = sizeof(cell_t) * (N+2);
  recv(ghost_row_top, bytes, pid-1, TOP_MSG_ID);
  recv(ghost_row_bot, bytes, pid+1, BOT_MSG_ID);
  // Thread 2 now has the data necessary to perform its computation

Message passing solver
▪ Note the similar structure to the shared address space solver, but communication is now explicit:
  - Send and receive ghost rows
  - Perform computation
  - All threads send their local mydiff to thread 0
  - Thread 0 computes the termination predicate and sends the result back to all other threads
▪ Example from: Culler, Singh, and Gupta

Notes on message passing example
▪ Computation:
  - Array indexing is relative to the local address space (not global grid coordinates)
▪ Communication:
  - Performed through messages, en masse rather than one element at a time. Why?
▪ Synchronization:
  - Performed through sends and receives
  - Think of how to implement mutual exclusion, barriers, and flags using messages
▪ For convenience: message passing libraries often include higher-level primitives (implemented using send and receive), e.g., an alternative solution using reduce/broadcast constructs

Send and receive variants
▪ Taxonomy: send/recv calls are either synchronous or asynchronous; asynchronous calls are further divided into blocking async and non-blocking async
▪ Synchronous:
  - SEND: call returns when the message data resides in the address space of the receiver (and the sender has received an acknowledgement that this is the case)
  - RECV: call returns when the data from the message has been copied into the address space of the receiver (and an acknowledgement has been sent)
▪ Timeline:
  - Sender: call SEND(); copy data from the sender's address space buffer into a network buffer; send the message; receive the ack; SEND() returns
  - Receiver: call RECV(); receive the message; copy data into the receiver's address space buffer; send the ack; RECV() returns

As implemented on the previous slide, if our message passing solver uses blocking send/recv, it would deadlock! Why? How can we fix it (while still using blocking send/recv)?

Message passing solver
▪ This code will deadlock. Why?
  - Send and receive ghost rows
  - Perform computation
  - All threads send their local mydiff to thread 0
  - Thread 0 computes the termination predicate and sends the result back to all other threads
▪ Example from: Culler, Singh, and Gupta

Send and receive variants
▪ Async blocking:
  - SEND: call copies data from the address space into system buffers, then returns
    - Does not guarantee the message has been received (or even sent)
  - RECV: ...
