CS 140: Matrix Multiplication

• Matrix multiplication I: parallel issues
• Matrix multiplication II: cache issues

Thanks to Jim Demmel and Kathy Yelick (UCB) for some of these slides.

Communication Volume Model

• Network of p processors, each with local memory
• Message passing
• Communication volume (v)
  • Total size (in words) of all messages passed during the computation
  • Broadcasting one word costs volume p (actually, p-1)
• No explicit accounting for communication time
  • Thus, we can't really model parallel efficiency or speedup; for that, we'd use the latency-bandwidth model (see extra slides)

Matrix-Matrix Multiplication

{implements C = C + A*B}
for i = 1 to n
  for j = 1 to n
    for k = 1 to n
      C(i,j) = C(i,j) + A(i,k) * B(k,j)

[Figure: C(i,j) = C(i,j) + A(i,:) * B(:,j), i.e., each element of C is updated with the dot product of a row of A and a column of B]

The algorithm has 2n^3 = O(n^3) flops and operates on 3n^2 words of memory.

Parallel Matrix Multiply

• Compute C = C + A*B
• Basic sequential algorithm:
  • C(i,j) += A(i,1)*B(1,j) + A(i,2)*B(2,j) + ... + A(i,n)*B(n,j)
  • work = t1 = 2n^3 floating point operations ("flops")
• Variables are:
  • Data layout
  • Structure of communication
  • Schedule of communication

Parallel Matrix Multiply with 1D Column Layout

• Assume matrices are n x n and n is divisible by p (a reasonable assumption for analysis, not for code)
• A(i) is the n-by-n/p block column that processor i owns (similarly B(i) and C(i))
• B(i,j) is an n/p-by-n/p subblock of B(i)
  • in rows j*n/p through (j+1)*n/p - 1
• Then: C(i) += A(0)*B(0,i) + A(1)*B(1,i) + ... + A(p-1)*B(p-1,i)

[Figure: block columns laid out across processors p0 through p7]

Matmul for 1D Layout on a Processor Ring

• Proc k communicates only with procs k-1 and k+1
• Different pairs of processors can communicate simultaneously
• Round-robin "merry-go-round" algorithm:

Copy A(myproc) into MGR            (MGR = "Merry-Go-Round")
C(myproc) = C(myproc) + MGR * B(myproc, myproc)
for j = 1 to p-1
  send MGR to processor (myproc+1) mod p        (but see deadlock below)
  receive MGR from processor (myproc-1) mod p   (but see below)
  C(myproc) = C(myproc) + MGR * B((myproc-j) mod p, myproc)

• One iteration: v = n^2
• All p-1 iterations: v = (p-1) * n^2 ~ p*n^2
• Optimal for 1D data layout:
  • Perfect speedup for arithmetic
  • A(myproc) must meet each C(myproc)
• "Nice" communication pattern: independent communications around the ring can probably be overlapped
• In the latency/bandwidth model (see extra slides), parallel efficiency e = 1 - O(p/n)

MatMul with 2D Layout

• Consider processors in a 2D grid (physical or logical)
• Processors can communicate with their 4 nearest neighbors
  • Alternative pattern: broadcast along rows and columns
• Assume p is a perfect square: an s x s grid

p(0,0) p(0,1) p(0,2)
p(1,0) ...
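The merry-go-round algorithm can be sketched as a serial simulation. This is an illustrative sketch, not from the slides: the names `ring_matmul` and `w` and the NumPy setting are my own, and a real implementation would use message passing (e.g. MPI) with nonblocking or paired sends/receives to avoid the deadlock the slides warn about.

```python
import numpy as np

def ring_matmul(A, B, p):
    """Compute C = A @ B by serially simulating p ring processors,
    each owning an n-by-(n/p) block column of A, B, and C."""
    n = A.shape[0]
    assert n % p == 0          # slide assumption: p divides n
    w = n // p                 # width of each block column
    # "Processor" i starts with MGR = its own block column A(i).
    MGR = [A[:, i*w:(i+1)*w].copy() for i in range(p)]
    C = np.zeros_like(A)
    for j in range(p):         # initial local multiply + p-1 shifts
        for i in range(p):
            k = (i - j) % p    # owner of the A block proc i currently holds
            # B(k, i) = rows k*w:(k+1)*w of proc i's block column of B
            C[:, i*w:(i+1)*w] += MGR[i] @ B[k*w:(k+1)*w, i*w:(i+1)*w]
        # Rotate: each proc sends MGR to proc (myproc+1) mod p,
        # so proc i's new block is proc (i-1) mod p's old one.
        MGR = [MGR[(i - 1) % p] for i in range(p)]
    return C
```

After p-1 rotations every block column A(k) has visited every processor, so each C(i) accumulates all p terms A(k)*B(k,i) from the slide's formula.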
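The claimed volume v = (p-1) * n^2 can be checked with a small helper (the name `ring_volume` is hypothetical): in each of the p-1 iterations, every one of the p processors sends its n-by-(n/p) block of A, i.e. n^2/p words, so each iteration moves n^2 words in total.

```python
def ring_volume(n, p):
    # Per iteration: p messages of n*(n/p) = n^2/p words each -> n^2 words.
    words_per_iteration = p * (n * (n // p))
    return (p - 1) * words_per_iteration
```

For n = 1000 and p = 10 this gives 9,000,000 words, matching (p-1)*n^2.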
Spring '09
Keywords: central processing unit, CPU cache, L2 cache, Cannon skewing
