Carnegie Mellon Parallel Computing Notes on Lecture 6 (15-418, Spring 2014)



[Figure: two processors, each with a local cache and a memory, connected by a network (recall lecture 3)]

Review: assignment in a shared address space
▪ Grid data resided in a single array in the shared address space
  - The array was accessible to all threads
▪ Each thread manipulated the region it was assigned to process
  - Assignment decisions impacted performance: different assignments could yield different amounts of communication

Message passing model
▪ Grid data stored in four separate address spaces (four private arrays)
  [Figure: Thread 1, 2, 3, and 4 address spaces, each holding a private block of grid rows]

Replication required to perform computation
▪ Required for correctness
▪ Example: threads 1 and 3 send a row to thread 2 (otherwise thread 2 cannot update its local cells)
▪ "Ghost cells": grid cells replicated from a remote address space. It is common to say that the information in ghost cells is "owned" by other threads.
  [Figure: thread 1 and thread 3 each send a boundary row into thread 2's address space]

Thread 2 logic:

  cell_t ghost_row_top[N+2];   // ghost row storage
  cell_t ghost_row_bot[N+2];   // ghost row storage

  int bytes = sizeof(cell_t) * (N+2);
  recv(ghost_row_top, bytes, pid-1, TOP_MSG_ID);
  recv(ghost_row_bot, bytes, pid+1, BOT_MSG_ID);

  // Thread 2 now has the data necessary to perform its computation
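The send/recv calls above are the lecture's pseudocode. As a point of comparison, here is a minimal sketch (not from the lecture) of the same ghost-row exchange written against MPI, assuming a 1-D row decomposition where each rank owns rows_per_rank rows of an (N+2)-cell-wide grid plus one ghost row above and one below. The function name exchange_ghost_rows, the ROW_TAG tag, and the float cell type are illustrative assumptions.

  /* Sketch only: exchange ghost rows with the ranks above and below.
   * Layout assumption: local has rows_per_rank + 2 rows of (N+2) cells;
   * row 0 is the top ghost row, rows 1..rows_per_rank are owned, and
   * row rows_per_rank+1 is the bottom ghost row.                        */
  #include <mpi.h>

  #define N 64
  typedef float cell_t;
  enum { ROW_TAG = 0 };

  void exchange_ghost_rows(cell_t *local, int rows_per_rank,
                           int rank, int nranks)
  {
      int up    = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
      int down  = (rank < nranks - 1) ? rank + 1 : MPI_PROC_NULL;
      int count = N + 2;   // cells per row

      // Send first owned row up; receive the bottom ghost row from below.
      MPI_Sendrecv(&local[1 * (N + 2)], count, MPI_FLOAT, up, ROW_TAG,
                   &local[(rows_per_rank + 1) * (N + 2)], count, MPI_FLOAT,
                   down, ROW_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      // Send last owned row down; receive the top ghost row from above.
      MPI_Sendrecv(&local[rows_per_rank * (N + 2)], count, MPI_FLOAT,
                   down, ROW_TAG,
                   &local[0], count, MPI_FLOAT, up, ROW_TAG,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

Using MPI_Sendrecv (and MPI_PROC_NULL at the boundary ranks) avoids the deadlock that can arise if every rank posts a blocking send before its matching receive; after the call returns, each rank's ghost rows hold its neighbors' boundary rows and the local update can proceed, just as in the thread 2 logic above.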