cs140-matmul

CS 140: Matrix multiplication
• Matrix multiplication I: parallel issues
• Matrix multiplication II: cache issues
Thanks to Jim Demmel and Kathy Yelick (UCB) for some of these slides.

Communication volume model
• Network of p processors, each with local memory, communicating by message passing.
• Communication volume (v): the total size, in words, of all messages passed during the computation.
  • Broadcasting one word costs volume p (actually, p-1).
• No explicit accounting for communication time.
  • Thus we can't really model parallel efficiency or speedup; for that, we'd use the latency-bandwidth model (see extra slides).

Matrix-Matrix Multiplication
{implements C = C + A*B}
    for i = 1 to n
      for j = 1 to n
        for k = 1 to n
          C(i,j) = C(i,j) + A(i,k) * B(k,j)
Equivalently, C(i,j) = C(i,j) + A(i,:) * B(:,j).
The algorithm performs 2n^3 = O(n^3) flops and operates on 3n^2 words of memory.
(A C transcription of this loop nest appears after these slides.)

Parallel Matrix Multiply
• Compute C = C + A*B.
• Basic sequential algorithm:
  • C(i,j) += A(i,1)*B(1,j) + A(i,2)*B(2,j) + ... + A(i,n)*B(n,j)
  • work = t1 = 2n^3 floating point operations ("flops")
• The design variables are:
  • Data layout
  • Structure of communication
  • Schedule of communication

Parallel Matrix Multiply with 1D Column Layout
• Assume matrices are n x n and n is divisible by p (a reasonable assumption for analysis, not for code).
• A(i) is the n-by-n/p block column that processor i owns (similarly B(i) and C(i)).
• B(j,i) is the n/p-by-n/p subblock of B(i) in rows j*n/p through (j+1)*n/p - 1.
• Then: C(i) += A(0)*B(0,i) + A(1)*B(1,i) + ... + A(p-1)*B(p-1,i)
[Figure: the matrices split into block columns owned by processors p0 through p7.]

Matmul for 1D layout on a Processor Ring
• Proc k communicates only with procs k-1 and k+1.
• Different pairs of processors can communicate simultaneously.
• Round-Robin "Merry-Go-Round" algorithm (an MPI sketch appears after these slides):
    Copy A(myproc) into MGR                          (MGR = "Merry-Go-Round")
    C(myproc) = C(myproc) + MGR * B(myproc, myproc)
    for j = 1 to p-1
      send MGR to processor (myproc+1) mod p         (but see deadlock below)
      receive MGR from processor (myproc-1) mod p    (but see below)
      C(myproc) = C(myproc) + MGR * B((myproc-j) mod p, myproc)

Matmul for 1D layout on a Processor Ring (communication cost)
• One iteration: v = n^2
• All p-1 iterations: v = (p-1) * n^2 ≈ p*n^2
• This is optimal for a 1D data layout:
  • Perfect speedup for the arithmetic.
  • Every block A(j) must meet every block C(i), so any 1D algorithm must move at least about (p-1)*n^2 words.
• "Nice" communication pattern: independent communications in the ring can probably be overlapped.
• In the latency/bandwidth model (see extra slides), parallel efficiency e = 1 - O(p/n).

MatMul with 2D Layout
• Consider the processors as a 2D grid (physical or logical).
• Processors can communicate with their 4 nearest neighbors.
  • Alternative pattern: broadcast along rows and columns.
• Assume p is a perfect square, arranged as an s x s grid: p(0,0), p(0,1), p(0,2), p(1,0), ...
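
For concreteness, here is a direct C transcription of the triple loop on the Matrix-Matrix Multiplication slide. It is a sketch, not code from the course: the fixed size N, the 0-indexed row-major arrays, and the identity-matrix test in main are illustrative choices.

    #include <stdio.h>

    #define N 4   /* small example size; stands in for the slides' n */

    /* C = C + A*B: the triple loop from the slide, 0-indexed and row-major.
       The innermost statement does one multiply and one add, so the nest
       performs 2*N*N*N flops while touching 3*N*N words of matrix data. */
    void matmul(double C[N][N], double A[N][N], double B[N][N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    C[i][j] += A[i][k] * B[k][j];
    }

    int main(void) {
        double A[N][N], B[N][N], C[N][N] = {{0}};
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                A[i][j] = 1.0;                   /* A = all ones               */
                B[i][j] = (i == j) ? 1.0 : 0.0;  /* B = identity, so C ends = A */
            }
        matmul(C, A, B);
        printf("C[0][0] = %g (expected 1)\n", C[0][0]);
        return 0;
    }

Counting the multiply and add in the innermost statement over all n^3 iterations gives exactly the 2n^3 flops quoted on the slide, against 3n^2 words of matrix data.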
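
The Merry-Go-Round algorithm on the ring slide is written as a blocking send followed by a receive, which is what the "but see deadlock below" caveat points at: if every processor blocks in its send, nobody reaches the receive. One common way to write the shift safely is a combined send/receive. The sketch below is my own MPI rendering under stated assumptions, not the course's code: local blocks are stored column-major with width w = n/p, and the names (ring_matmul, MGR, Ablk, ...) are invented for illustration.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* 1D column layout: each rank owns n-by-(n/p) block columns A(me), B(me),
       C(me), stored column-major as n*w doubles with w = n/p.  Copies of the
       A blocks circulate around the ring in MGR; B and C never move. */
    void ring_matmul(double *Ablk, const double *Bblk, double *Cblk,
                     int n, MPI_Comm comm) {
        int p, me;
        MPI_Comm_size(comm, &p);
        MPI_Comm_rank(comm, &me);
        int w = n / p;                            /* slide's assumption: p divides n */
        int right = (me + 1) % p, left = (me + p - 1) % p;

        double *MGR = malloc((size_t)n * w * sizeof *MGR);
        for (long i = 0; i < (long)n * w; i++) MGR[i] = Ablk[i];   /* MGR = A(me) */

        for (int j = 0; j < p; j++) {
            int owner = (me - j + p) % p;         /* MGR currently holds A(owner)  */
            /* C(me) += MGR * B(owner, me); B(owner, me) occupies rows
               owner*w .. owner*w + w - 1 of the local block column Bblk.          */
            for (int c = 0; c < w; c++)
                for (int k = 0; k < w; k++)
                    for (int r = 0; r < n; r++)
                        Cblk[r + (long)c * n] +=
                            MGR[r + (long)k * n] * Bblk[(owner * w + k) + (long)c * n];
            if (j < p - 1)
                /* Combined send/receive: the ring shift cannot deadlock. */
                MPI_Sendrecv_replace(MGR, n * w, MPI_DOUBLE,
                                     right, 0, left, 0, comm, MPI_STATUS_IGNORE);
        }
        free(MGR);
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int p, me;
        MPI_Comm_size(MPI_COMM_WORLD, &p);
        MPI_Comm_rank(MPI_COMM_WORLD, &me);
        int n = 8 * p, w = n / p;                 /* any n divisible by p works */
        double *A = calloc((size_t)n * w, sizeof *A);
        double *B = calloc((size_t)n * w, sizeof *B);
        double *C = calloc((size_t)n * w, sizeof *C);
        for (long i = 0; i < (long)n * w; i++) A[i] = 1.0;    /* A = all ones */
        for (int c = 0; c < w; c++)
            B[(me * w + c) + (long)c * n] = 1.0;              /* B = identity */
        ring_matmul(A, B, C, n, MPI_COMM_WORLD);
        if (me == 0) printf("C[0] = %g (expected 1)\n", C[0]);
        free(A); free(B); free(C);
        MPI_Finalize();
        return 0;
    }

Each MPI_Sendrecv_replace moves n*w = n^2/p words per processor, so one step of the ring costs volume n^2 across all p processors, and the p-1 steps give the (p-1)*n^2 total on the communication-cost slide.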