John von Neumann Institute for Computing

Distributed Shared Memory in a Grid Environment
J.P. Ryan, B.A. Coghlan

Published in: Parallel Computing: Current & Future Issues of High-End Computing, Proceedings of the International Conference ParCo 2005, G.R. Joubert, W.E. Nagel, F.J. Peters, O. Plata, P. Tirado, E. Zapata (Editors), John von Neumann Institute for Computing, Jülich, NIC Series, Vol. 33, ISBN 3-00-017352-8, pp. 129-136, 2006.

© 2006 by John von Neumann Institute for Computing. Permission to make digital or hard copies of portions of this work for personal or classroom use is granted provided that the copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise requires prior specific permission by the publisher mentioned above.

http://www.fz-juelich.de/nic-series/volume33
Distributed Shared Memory in a Grid Environment

John P. Ryan (a), Brian A. Coghlan (a), {john.p.ryan, coghlan}@cs.tcd.ie
(a) Computer Architecture Group, Dept. of Computer Science, Trinity College Dublin, Dublin 2, Ireland.

1. Abstract

Software distributed shared memory (DSM) aims to provide the illusion of a shared memory environment when the physical resources do not allow for it. Here we apply this execution model to the Grid. Typically a DSM runtime incurs substantial overheads that severely degrade an application's performance relative to a more efficient message-passing implementation. We examine mechanisms that have the potential to increase DSM performance by minimizing high-latency inter-process messages and data transfers. Relaxed consistency models are investigated, as well as the use of a grid information system to ascertain topology information. The latter allows for hierarchy-aware management of shared data and synchronization variables. The process of incremental hybridization, where more efficient message-passing mechanisms incrementally replace those DSM actions that adversely affect performance, is also explored.

2. Introduction

The message-passing programming paradigm enables the construction of parallel applications while minimizing the impact of distributing an algorithm across multiple processes, by providing simple mechanisms to transfer shared application data between the processes. However, considerable burden is placed on the programmer, because send/receive message pairs must be explicitly declared, and this can often be a source of errors. Implementations of message-passing paradigms currently exist for grid platforms [11].
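To make the send/receive pairing burden concrete, the following minimal sketch (not from the paper; it uses Python's standard multiprocessing module rather than an MPI or grid middleware API, and the function names are illustrative) shows how every transfer of shared data requires an explicitly matched send and receive:

```python
# Sketch of explicit message passing: each send must be matched by a
# corresponding receive on the peer, which is the pairing the text
# identifies as error-prone. Illustrative only; real grid codes would
# use an MPI implementation rather than multiprocessing.Pipe.
from multiprocessing import Pipe, Process

def worker(conn):
    # Explicit receive: blocks until the parent issues a matching send.
    chunk = conn.recv()
    # Explicit send: the parent must post the matching receive, or the
    # result is simply lost -- no shared address space catches it.
    conn.send([x * 2 for x in chunk])
    conn.close()

def scatter_and_gather(data):
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send(data)       # one half of the send/recv pair
    result = parent_conn.recv()  # the matching receive
    p.join()
    return result

if __name__ == "__main__":
    print(scatter_and_gather([1, 2, 3]))  # -> [2, 4, 6]
```

Under a shared memory model the worker would instead read and write the data in place, with no transfer routines instrumented into the code, which is the contrast the next paragraph draws.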
The shared memory paradigm is a simpler paradigm for constructing parallel applications, as it offers uniform access methods to memory for all user threads of execution, removing the responsibility of explicitly instrumenting the code with data transfer routines, and hence offers a less burdensome method of constructing applications. Its primary disadvantage is its limited scalability; nonetheless, a great deal of parallel software has been written in this manner. A secondary disadvantage