CS3211-Tut11 - CS3211 Parallel and Concurrent Programming...

CS3211 Parallel and Concurrent Programming – Guidelines for tutorial (12-16 April 2010)

Sample Exercises:

[Please conduct these as an interactive discussion, rather than an evaluation. Please also make it clear to the students that they are not being evaluated for their performance in these exercises, so that they are not afraid to make mistakes while answering.]

MPI usage instructions
See the file tembusu-MPI-access.pdf in Workbin\Assignments

1. MPI Example Warmup – Management of groups.

What will be printed by the following program (and why)? Assume MPI_Group_incl creates a new group (which is then used within a communicator).

    #include "mpi.h"
    #include <stdio.h>
    #include <stdlib.h>
    #define NPROCS 8

    int main(int argc, char *argv[])
    {
        int rank, new_rank, sendbuf, recvbuf, numtasks;
        int ranks1[4] = {0, 1, 2, 3}, ranks2[4] = {4, 5, 6, 7};
        MPI_Group orig_group, new_group;
        MPI_Comm new_comm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

        if (numtasks != NPROCS) {
            printf("Must specify %d tasks. Terminating.\n", NPROCS);
            MPI_Finalize();
            exit(0);
        }

        sendbuf = rank;

        /* Extract the original group handle */
        MPI_Comm_group(MPI_COMM_WORLD, &orig_group);

        /* Divide tasks into two distinct groups based upon rank */
        if (rank < NPROCS/2) {
            MPI_Group_incl(orig_group, NPROCS/2, ranks1, &new_group);
        } else {
            MPI_Group_incl(orig_group, NPROCS/2, ranks2, &new_group);
        }

        /* Create the new communicator and then perform collective communications */
        MPI_Comm_create(MPI_COMM_WORLD, new_group, &new_comm);
        MPI_Allreduce(&sendbuf, &recvbuf, 1, MPI_INT, MPI_SUM, new_comm);

        MPI_Group_rank(new_group, &new_rank);
        printf("rank= %d newrank= %d recvbuf= %d\n", rank, new_rank, recvbuf);

        MPI_Finalize();
    }
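[Discussion point, not part of the original handout: the same two-way division can be expressed more compactly with MPI_Comm_split, which avoids the explicit group manipulation. A minimal sketch, assuming the same 8-process setup:]

```c
/* Sketch: splitting MPI_COMM_WORLD into two sub-communicators with
 * MPI_Comm_split. Processes passing the same "color" end up in the
 * same new communicator; the "key" (here: rank) orders them within it. */
#include "mpi.h"
#include <stdio.h>
#define NPROCS 8

int main(int argc, char *argv[])
{
    int rank, new_rank, sendbuf, recvbuf;
    MPI_Comm new_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    sendbuf = rank;

    /* color 0 for ranks 0..3, color 1 for ranks 4..7 */
    MPI_Comm_split(MPI_COMM_WORLD, rank < NPROCS/2 ? 0 : 1, rank, &new_comm);

    /* The reduction now runs independently within each sub-communicator */
    MPI_Allreduce(&sendbuf, &recvbuf, 1, MPI_INT, MPI_SUM, new_comm);
    MPI_Comm_rank(new_comm, &new_rank);
    printf("rank= %d newrank= %d recvbuf= %d\n", rank, new_rank, recvbuf);

    MPI_Comm_free(&new_comm);
    MPI_Finalize();
    return 0;
}
```

Run with 8 processes, this should produce the same output as the group-based version.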
Answer: The program divides the eight processes of MPI_COMM_WORLD into two communicators and assigns new ranks to the processes. Within each communicator, MPI_Allreduce sums the original ranks, so ranks 0-3 print recvbuf = 6 and ranks 4-7 print recvbuf = 22, each with a new rank between 0 and 3 inside its own group.

2. What will happen when we run the following program?

    #include "mpi.h"
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int numtasks, taskid, len, buffer, root, count;
        char hostname[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
        MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
        MPI_Get_processor_name(hostname, &len);
        printf("Task %d on %s starting...\n", taskid, hostname);

        buffer = 23;
        root = 0;
        count = taskid;

        if (taskid == root)
            printf("Root: Number of MPI tasks is: %d\n", numtasks);

        MPI_Bcast(&buffer, count, MPI_INT, root, MPI_COMM_WORLD);

        MPI_Finalize();
    }

Answer: Try it on tembusu. The program will "hang". The count argument passed to the broadcast library call is wrong: because count = taskid, each process passes a different count to MPI_Bcast, and a collective call must be matched across all processes in the communicator. The root (count = 0) broadcasts nothing while every other process waits to receive taskid integers that never arrive. Change the count to 1 on every process, and the problem should disappear.
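[For discussion, a corrected version of the program can be shown. This is a sketch of the fix suggested in the answer, with count fixed at 1 on every process:]

```c
/* Sketch: corrected broadcast. Every process passes the same count (1),
 * so the MPI_Bcast call matches across all ranks and cannot deadlock. */
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numtasks, taskid, buffer, root = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &taskid);

    /* Only the root's buffer value matters; it is copied to everyone */
    buffer = (taskid == root) ? 23 : 0;

    /* count must be identical on every process in the communicator */
    MPI_Bcast(&buffer, 1, MPI_INT, root, MPI_COMM_WORLD);
    printf("Task %d received buffer = %d\n", taskid, buffer);

    MPI_Finalize();
    return 0;
}
```

After the broadcast, every process should print buffer = 23.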