...cessors: p
• PROBLEM:
  – How to scale a benchmark to run on more processors
• CONSTRAINTS:
  – Memory-constrained scaling: keep the amount of memory per processor constant
  – Time-constrained scaling: keep total execution time constant (assuming perfect speedup)

55

Fallacy: Amdahl's Law doesn't apply to parallel computers
• Since some part of every program is sequential, speedup can't reach 100X?
• A 1987 claim to break it with a 1000X speedup: researchers scaled the benchmark to a data set size 1000 times larger and compared the uniprocessor and parallel execution times of the scaled benchmark. For this particular algorithm the sequential portion of the program was constant, independent of the size of the input, and the rest was fully parallel, hence linear speedup with 1000 processors.
• Usually the sequential component scales with the data too.

56

Fallacy: Linear speedups are needed to make multiprocessors cost-effective
• Mark Hill & David Wood, 1995 study
• Compare costs of an SGI uniprocessor and MP:
  – Uniprocessor = $38,400 + $100 * MByte
  – MP = $81,600 + $20,000 * P + $100 * MByte
• 1 GB: Uni = $138k vs....
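The cost-effectiveness argument can be checked with the two cost formulas above. A minimal sketch follows, assuming 1 GB is taken as 1000 MB (which matches the slide's "$138k" uniprocessor figure); the processor counts chosen for the loop are illustrative, not from the slide:

```python
# Cost models from the Hill & Wood (1995) comparison on the slide.
# mem_mb is memory in MByte; 1 GB is treated as 1000 MB here so that
# uni_cost(1000) reproduces the slide's ~$138k figure.

def uni_cost(mem_mb):
    # Uniprocessor = $38,400 + $100 * MByte
    return 38_400 + 100 * mem_mb

def mp_cost(p, mem_mb):
    # MP = $81,600 + $20,000 * P + $100 * MByte
    return 81_600 + 20_000 * p + 100 * mem_mb

mem = 1000
print(uni_cost(mem))  # 138400, the slide's "$138k"
for p in (2, 4, 8):
    costup = mp_cost(p, mem) / uni_cost(mem)
    print(p, round(costup, 2))
```

The point of the fallacy: a multiprocessor is cost-effective whenever its speedup exceeds its costup (cost relative to the uniprocessor). At P = 8 the costup here is about 2.47, so a speedup of 3 on 8 processors, far below linear, already pays off.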