Note: ZPL scales as well as or better than Fortran+MPI, since its communication is generated by the compiler from the global-view code rather than written by hand.

rprj3 in Fortran+MPI

```fortran
c     (excerpt -- the preview begins mid-routine, inside the loops over
c      j3 and j2; the first x1 assignment is reconstructed from the
c      symmetric y1/x2 stencil lines)
            x1(i1-1) = r(i1-1,i2-1,i3  ) + r(i1-1,i2+1,i3  )
     >               + r(i1-1,i2,  i3-1) + r(i1-1,i2,  i3+1)
            y1(i1-1) = r(i1-1,i2-1,i3-1) + r(i1-1,i2-1,i3+1)
     >               + r(i1-1,i2+1,i3-1) + r(i1-1,i2+1,i3+1)
         enddo

         do j1=2,m1j-1
            i1 = 2*j1-d1
            y2 = r(i1,  i2-1,i3-1) + r(i1,  i2-1,i3+1)
     >         + r(i1,  i2+1,i3-1) + r(i1,  i2+1,i3+1)
            x2 = r(i1,  i2-1,i3  ) + r(i1,  i2+1,i3  )
     >         + r(i1,  i2,  i3-1) + r(i1,  i2,  i3+1)
            s(j1,j2,j3) =
     >           0.5D0    *  r(i1,i2,i3)
     >         + 0.25D0   * (r(i1-1,i2,i3) + r(i1+1,i2,i3) + x2)
     >         + 0.125D0  * (x1(i1-1) + x1(i1+1) + y2)
     >         + 0.0625D0 * (y1(i1-1) + y1(i1+1))
         enddo
      enddo
      enddo

      j = k-1
      call comm3(s,m1j,m2j,m3j,j)

      return
      end
```

rprj3 in ZPL

```
procedure rprj3(var S,R: [,,] double; d: array of direction);
begin
  S := 0.5000 * R
     + 0.2500 * (R@^d[ 1, 0, 0] + R@^d[ 0, 1, 0] + R@^d[ 0, 0, 1] +
                 R@^d[-1, 0, 0] + R@^d[ 0,-1, 0] + R@^d[ 0, 0,-1])
     + 0.1250 * (R@^d[ 1, 1, 0] + R@^d[ 1, 0, 1] + R@^d[ 0, 1, 1] +
                 R@^d[ 1,-1, 0] + R@^d[ 1, 0,-1] + R@^d[ 0, 1,-1] +
                 R@^d[-1, 1, 0] + R@^d[-1, 0, 1] + R@^d[ 0,-1, 1] +
                 R@^d[-1,-1, 0] + R@^d[-1, 0,-1] + R@^d[ 0,-1,-1])
     + 0.0625 * (R@^d[ 1, 1, 1] + R@^d[ 1, 1,-1] +
                 R@^d[ 1,-1, 1] + R@^d[ 1,-1,-1] +
                 R@^d[-1, 1, 1] + R@^d[-1, 1,-1] +
                 R@^d[-1,-1, 1] + R@^d[-1,-1,-1]);
end;
```

[Figure: "Code Size" -- bar chart of lines of code for F+MPI, ZPL, and A-ZPL, with each bar broken down into communication, declarations, and computation; the F+MPI bar is several times taller than either ZPL bar.]

Language Code Size Notes
•  the ZPL codes are 5.5–6.5x shorter because ZPL supports a global view of parallelism rather than an SPMD programming model
   ⇒ little/no code for communication
   ⇒ little/no code for array bookkeeping
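To see why the global view is so compact, the same 27-point weighted restriction stencil can be sketched in NumPy, where whole-array slices play the role of ZPL's shifted references `R@^d[...]`. This is a rough sketch, not from the lecture: the function name, the ghost-layer convention, and the simple every-other-point coarsening are illustrative assumptions, and real NAS MG index bookkeeping (the `2*j1-d1` mapping) is omitted.

```python
import numpy as np

def rprj3_sketch(r):
    """Illustrative global-view restriction (NOT the NAS reference code).

    r is a 3-D array with one ghost layer on each face. Each interior
    point gets the 27-point weighted sum (0.5 center, 0.25 faces,
    0.125 edges, 0.0625 corners), then every other point is kept."""
    def sh(d1, d2, d3):
        # Shifted interior view of r, analogous to ZPL's R@^d[d1,d2,d3].
        return r[1 + d1 : (-1 + d1) or None,
                 1 + d2 : (-1 + d2) or None,
                 1 + d3 : (-1 + d3) or None]

    # Weight depends only on how many offsets are nonzero.
    weights = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.0625}

    s = np.zeros_like(sh(0, 0, 0))
    for d1 in (-1, 0, 1):
        for d2 in (-1, 0, 1):
            for d3 in (-1, 0, 1):
                s += weights[abs(d1) + abs(d2) + abs(d3)] * sh(d1, d2, d3)
    return s[::2, ::2, ::2]  # sample every other point -> coarse grid
```

As in ZPL, there is no communication or per-process bounds code at all: the whole-array expressions state the stencil once, and the runtime (here NumPy, there the ZPL compiler) handles the data movement.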