Message Passing: From Parallel Computing to the Grid

Geoffrey Fox
Indiana University Computer Science, Informatics and Physics
Community Grid Computing Laboratory, 501 N Morton Suite 224, Bloomington IN 47404
[email protected]

Parallel Computing

Over the past decades, the computational science community has debated back and forth the best architecture for parallel computing: sometimes it is distributed memory; sometimes SIMD (single instruction, multiple data, synchronous as in the CM-1 and CM-2 from Thinking Machines); sometimes MIMD (multiple instruction, multiple data, as in networked computers); sometimes shared memory; sometimes vector nodes; sometimes multi-threading; and sometimes more or less all of the above. This debate has been enlivened recently by the high performance achieved by the 40-teraflop Japanese Earth Simulator supercomputer, which uses a slightly heretical architecture. The arguments are accompanied by a related discussion as to the appropriate parallel computing model.

Whatever the machine architecture, users would certainly like to write their software once and see it mapped efficiently onto the parallel hardware. Experience has shown an almost irreconcilable difference between the way users would like to write their software and the way machines would like to be instructed in order to run efficiently. In particular, the natural languages for sequential machines do not parallelize easily. It is interesting that even as languages improve (Fortran, C, C++, Java, Python), writing parallel code has become no easier. Most science and engineering simulations are intrinsically parallel (perhaps because "nature is parallel"), but the obvious expression of these problems in today's common languages runs poorly on most parallel machines. Of course there is continuing major effort on better parallel compilers and runtimes, but it is a difficult battle: expressing most problems in existing languages leads to parallelism that is not explicit but rather a consequence of complex dependencies, often discoverable only at runtime. This leads to the disappointing conclusion that the user must help the computer in some way or other. The different architectures then suggest different programming models (OpenMP, HPF, MPI, ...).

However, the conservative user will express the parallelism explicitly by dividing the defining data domain into parts. Each part is managed as a separate process (the SPMD, or single program multiple data, model), and the processes communicate via messages. This messaging is usually implemented with MPI today. The use of message passing in parallel computing is a reasonable decision, as the resultant code probably runs well on all architectures. It is not a trivial choice, however, as it requires substantial additional work over and above that needed in the sequential case.
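To make the SPMD picture concrete, here is a minimal sketch in C using MPI, the message-passing library the column refers to. The 1-D domain decomposition, local array size, and single halo exchange are illustrative assumptions, not details from the original.

/* Minimal SPMD sketch: each process owns one slice of a 1-D domain and
 * exchanges boundary (halo) values with its neighbours via MPI messages.
 * The domain size and the single halo exchange are illustrative only. */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 100                   /* points owned by each process (assumed) */

int main(int argc, char **argv)
{
    int rank, size;
    double u[LOCAL_N + 2];            /* local slice plus two halo cells */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* every process runs the same program on its own piece of the data */
    for (int i = 1; i <= LOCAL_N; i++)
        u[i] = rank;                  /* dummy initial values */

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* exchange boundary values with neighbouring processes */
    MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  0,
                 &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[LOCAL_N],     1, MPI_DOUBLE, right, 1,
                 &u[0],           1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d of %d: halo values %.0f | %.0f\n",
           rank, size, u[0], u[LOCAL_N + 1]);

    MPI_Finalize();
    return 0;
}

Built with mpicc and launched with, for example, mpirun -np 4, every process executes this same program on its own slice of the data and exchanges only boundary values with its neighbours, which is exactly the explicit, message-based parallelism described above.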

Messaging in Grids and Peer-to-Peer Networks
Now let us consider the Grid and peer-to-peer (P2P) networks discussed in previous columns. Here we are not given a single large-scale simulation – the archetypical parallel