• ϕ(n) is usually O(n) or higher (i.e. computational complexity is usually higher than the complexity of communication; the same is often true of σ(n) as well)
• κ(n,p) is often O(1) or O(log₂ p)
• Increasing n allows ϕ(n) to dominate κ(n,p)
• Thus, increasing n increases the speedup Ψ for a fixed number of processors
• Another cheat to get good results: make n large
• Most benchmarks have standard-sized inputs to preclude this

Tuesday, February 14, 12

Amdahl Effect
[Figure: Speedup vs. number of processors for n = 1000, n = 10000, and n = 100000; larger n gives a higher speedup curve.]
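The Amdahl effect can be illustrated with a small sketch. The cost functions below (σ(n) = n, ϕ(n) = n², κ(n,p) = 10·log₂ p) are illustrative assumptions chosen so that ϕ grows faster than κ, not values from the lecture:

```python
import math

def speedup(n, p):
    """Psi(n,p) = (sigma(n) + phi(n)) / (sigma(n) + phi(n)/p + kappa(n,p)),
    with illustrative costs sigma(n) = n, phi(n) = n^2, kappa(n,p) = 10*log2(p)."""
    sigma = n            # sequential work
    phi = n ** 2         # parallelizable work
    kappa = 10 * math.log2(p)  # communication overhead
    return (sigma + phi) / (sigma + phi / p + kappa)

# For a fixed p, speedup improves as n grows: phi(n) dominates kappa(n,p).
for n in (1000, 10000, 100000):
    print(f"n = {n:6d}: speedup on 16 processors = {speedup(n, 16):.2f}")
```

As n grows, the speedup on 16 processors climbs toward (but never reaches) 16, which is exactly the Amdahl effect in the figure.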
Summary

• Allows speedup to be computed for
  • a fixed problem size n
  • a varying number of processes
• Ignores communication costs
• Is optimistic, but gives an upper bound

Gustafson-Barsis' Law
• How does speedup scale with larger problem sizes?
• Given a fixed amount of time, how much bigger a problem can we solve by adding more processors?
• Large problem sizes often correspond to better resolution and precision in the problem being solved.

Basic terms
Speedup is

    Ψ(n,p) = (σ(n) + ϕ(n)) / (σ(n) + ϕ(n)/p + κ(n,p))

Because κ(n,p) > 0,

    Ψ(n,p) ≤ (σ(n) + ϕ(n)) / (σ(n) + ϕ(n)/p)

Let s be the fraction of time in a parallel execution of the program that is spent performing sequential operations. Then (1 - s) is the fraction of time spent in a parallel execution of the program performing parallel operations.

Note that Amdahl's Law looks at the sequential and parallel parts of the program for a given problem size, and the value of f is the fraction of time in a sequential execution that is inherently sequential.
Or stated differently . . .

Speedup in terms of the serial fraction of a program

Given this formulation, the fraction of the program that is serial is simply

    f = σ(n) / (σ(n) + ϕ(n))

Speedup can be rewritten in terms of f:

    Ψ(n,p) ≤ 1 / (f + (1 - f)/p)

This gives us Amdahl's Law. Note that the number of processors is not mentioned in the definition of f, because f is a fraction of the time in a sequential run.
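A minimal sketch of the bound, assuming only the formula above (the function name is ours):

```python
def amdahl_speedup(f, p):
    """Upper bound on speedup for serial fraction f on p processors:
    Psi <= 1 / (f + (1 - f) / p)."""
    return 1.0 / (f + (1.0 - f) / p)

# With f = 0.05, the bound approaches 1/f = 20 as p grows:
for p in (2, 8, 64, 10**6):
    print(f"f = 0.05, p = {p}: speedup bound = {amdahl_speedup(0.05, p):.2f}")
```

No matter how many processors are added, a 5% serial fraction caps the speedup below 1/f = 20.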
Some definitions

The sequential part of a parallel computation:

    s = σ(n) / (σ(n) + ϕ(n)/p)

The parallel part of a parallel computation:

    1 - s = (ϕ(n)/p) / (σ(n) + ϕ(n)/p)

And the speedup:

    Ψ(n,p) = (σ(n) + ϕ(n)) / (σ(n) + ϕ(n)/p)

Difference between GB Law and Amdahl's Law
• The serial portion in Amdahl's Law is a fraction of the total execution time of the program.
• The serial portion in GB is a fraction of the parallel execution time of the program.
• To use the GB Law, we assume the work scales to maintain the value of s.

Deriving GB Law
First, we show that the formula s + (1 - s)p is equivalent to our speedup formula. Normalize the parallel execution time to 1, so that σ(n) + ϕ(n)/p = 1. Then σ(n) = s and ϕ(n)/p = 1 - s, so ϕ(n) = (1 - s)p. Substituting into the speedup formula and simplifying:

    Ψ(n,p) = (σ(n) + ϕ(n)) / (σ(n) + ϕ(n)/p) = (s + (1 - s)p) / 1 = s + (1 - s)p
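A quick numeric check of this equivalence; the values of σ, ϕ, and p below are arbitrary illustrative choices:

```python
# Illustrative values: sequential work, parallelizable work, processor count.
sigma, phi, p = 10.0, 400.0, 8

t_parallel = sigma + phi / p        # parallel execution time
s = sigma / t_parallel              # sequential fraction of the parallel time

direct = (sigma + phi) / t_parallel  # speedup from the definition
via_gb = s + (1 - s) * p             # speedup from s + (1 - s)p

print(direct, via_gb)  # the two values agree
```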
Deriving GB Law

Second, we show that the formula s + (1 - s)p (which we just showed is equivalent to the speedup) leads to the GB Law formula:

    s + (1 - s)p = s + p - sp = p + (1 - p)s

This gives us the Gustafson-Barsis Law:

    Ψ(n,p) = p + (1 - p)s

An example
An application executing on 64 processors requires 220 seconds to run. It is experimentally determined through benchmarking that 5% of the time is spent in serial code on a single processor. What is the scaled speedup of the application?

s = 0.05, thus on 64 processors:

    Ψ = 64 + (1 - 64)(0.05) = 64 - 3.15 = 60.85

An example
Another way of looking at this: given P
processors, P a...
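The scaled-speedup computation from the example above can be checked in a few lines (the function name is ours):

```python
def gustafson_barsis_speedup(p, s):
    """Scaled speedup: Psi = p + (1 - p) * s,
    where s is the serial fraction of the parallel execution time."""
    return p + (1 - p) * s

# The example from the slides: 64 processors, s = 0.05.
psi = gustafson_barsis_speedup(64, 0.05)
print(f"Scaled speedup on 64 processors: {psi:.2f}")  # prints 60.85
```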
This note was uploaded on 02/19/2012 for the course ECE 563 taught by Professor Staff during the Spring '08 term at Purdue University, West Lafayette.