ALGORITHMS WITH SHARED
VARIABLES
Again, in this section, we focus on developing simple
algorithms that are not necessarily
very efficient. Shared-memory architectures and their
algorithms will be discussed in more
detail in Chapters 5 and 6.
Semigroup Computation
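A semigroup computation combines n values with an associative binary operator (such as max or +). A minimal sketch of how this might be done with shared variables, simulating n processors that halve the active set in each synchronous step, is shown below; the function name and step structure are illustrative assumptions, not the text's exact algorithm.

```python
# Sketch: semigroup computation over shared variables, simulating
# synchronous parallel steps. Each step, processor i combines its value
# with the value `stride` positions away, so ceil(log2 n) steps suffice.

def semigroup_reduce(shared, op):
    """Reduce shared[0..n-1] with associative op in ~log2(n) parallel steps."""
    data = list(shared)              # working copy of the shared variables
    n = len(data)
    stride = 1
    while stride < n:
        # one synchronous parallel step (simulated sequentially here)
        for i in range(0, n, 2 * stride):
            if i + stride < n:
                data[i] = op(data[i], data[i + stride])
        stride *= 2
    return data[0]

print(semigroup_reduce([3, 1, 4, 1, 5, 9, 2, 6], max))  # -> 9
```

With n processors this takes Theta(log n) time, which is simple but not work-efficient compared with the sequential n - 1 operations.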
COMPLEXITY CLASSES
Complexity theory is a branch of computer science that
deals with the ease or difficulty
of solving various computational problems of interest. In
complexity theory, problems are
divided into several complexity classes according to their computational difficulty.
ALGORITHM OPTIMALITY AND
EFFICIENCY
One way in which we use the big-oh and big-omega
notations, introduced in Section
3.1, is as follows. Suppose that we have constructed a valid
algorithm to solve a given
problem of size n in g(n) time, where g(n) is a known function of n.
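As an illustrative example (an assumption for concreteness, not taken from the text), consider comparison-based sorting: an algorithm's running time g(n) supplies an upper bound, and a problem-wide argument supplies a lower bound that holds for every algorithm.

```latex
% Upper bound from a specific algorithm (mergesort's worst case):
g(n) = O(n \log n)
% Lower bound for the problem itself: any comparison sort must make
% at least \lceil \log_2 n! \rceil = \Omega(n \log n) comparisons.
f(n) = \Omega(n \log n)
```

Because the upper and lower bounds match, g(n) = Theta(n log n) and the algorithm is asymptotically time-optimal for this problem.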
ASYMPTOTIC COMPLEXITY
Algorithms can be analyzed in two ways: precise and
approximate. In precise analysis,
we typically count the number of operations of various
types (e.g., arithmetic, memory
access, data transfer) performed in the worst or average
case.
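A minimal illustration of the two styles of analysis (an assumed example, not from the text): finding the maximum of n elements makes exactly n - 1 comparisons in every case, a precise count that asymptotic analysis then summarizes as Theta(n).

```python
# Precise analysis: count the comparison operations exactly.
# Asymptotic analysis: report the count as Theta(n).

def max_with_count(a):
    """Return (maximum of a, exact number of comparisons performed)."""
    comparisons = 0
    best = a[0]
    for x in a[1:]:
        comparisons += 1             # the operation being counted
        if x > best:
            best = x
    return best, comparisons

m, c = max_with_count([7, 2, 9, 4])
print(m, c)  # 9 3 -- exactly n - 1 = 3 comparisons
```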
PARALLELIZABLE TASKS AND
THE NC CLASS
In 1979, Nicholas Pippenger [Pipp79] suggested that
efficiently parallelizable problems
in P might be defined as those problems that can be solved
in a time period that is at most
polylogarithmic in the problem size n, using a number of processors that is at most polynomial in n.
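As a concrete sketch of membership in this class (an assumed illustration), prefix sums over n values can be computed in ceil(log2 n) synchronous steps using n simulated processors: polylogarithmic time with polynomially many processors.

```python
# Sketch: prefix sums in ceil(log2 n) parallel steps (Hillis-Steele style),
# simulating n processors. Step count is O(log n), processor count is n,
# which is the kind of bound that places a problem in NC.

def parallel_prefix_sum(a):
    x = list(a)
    n = len(x)
    d = 1
    steps = 0
    while d < n:
        prev = list(x)               # simulate simultaneous reads
        for i in range(d, n):        # one parallel step across processors
            x[i] = prev[i - d] + prev[i]
        d *= 2
        steps += 1
    return x, steps

sums, steps = parallel_prefix_sum([1, 2, 3, 4, 5, 6, 7, 8])
print(sums)   # [1, 3, 6, 10, 15, 21, 28, 36]
print(steps)  # 3 parallel steps for n = 8
```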
PARALLEL PROGRAMMING PARADIGMS
Several methods are used extensively in devising efficient
parallel algorithms for solving
problems of interest. A brief review of these methods
(divide and conquer, randomization,
approximation) is appropriate at this point.
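The first of these methods can be sketched briefly (an assumed illustration): in divide and conquer, the subproblems are independent and can therefore be assigned to different processors, with a final combining step.

```python
# Divide and conquer for the maximum: split, solve halves (independently,
# hence in parallel on a real machine), then combine the two results.

def dac_max(a, lo, hi):
    """Maximum of a[lo:hi] by divide and conquer."""
    if hi - lo == 1:
        return a[lo]
    mid = (lo + hi) // 2
    left = dac_max(a, lo, mid)       # these two calls are independent,
    right = dac_max(a, mid, hi)      # so they can run on separate processors
    return left if left >= right else right   # combine step

print(dac_max([3, 8, 1, 6, 7, 2], 0, 6))  # -> 8
```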
THE PRAM SHARED-MEMORY
MODEL
The theoretical model used for conventional or sequential
computers (SISD class) is
known as the random-access machine (RAM) (not to be
confused with random-access
memory, which has the same acronym). The parallel
version of RAM, known as the parallel random-access machine (PRAM), serves as the corresponding theoretical model for shared-memory parallel computers.
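As a toy illustration of PRAM-style reasoning (an assumed example, using the common-CRCW write convention in which concurrent writes of the same value are allowed): with n*n processors, the maximum of n values can be found in a constant number of steps.

```python
# CRCW PRAM sketch: processor (i, j) compares a[i] with a[j] and marks a[i]
# as "not the maximum" if it loses; ties are broken by index so exactly one
# survivor remains. Both phases are single parallel steps, simulated here.

def crcw_max(a):
    n = len(a)
    is_max = [True] * n              # shared memory, written concurrently
    # parallel step 1: all n*n comparisons happen "simultaneously"
    for i in range(n):
        for j in range(n):
            if a[j] > a[i] or (a[j] == a[i] and j < i):
                is_max[i] = False    # concurrent writes of the same value
    # parallel step 2: the unique surviving processor reports the answer
    return next(a[i] for i in range(n) if is_max[i])

print(crcw_max([4, 9, 1, 9, 5]))  # -> 9
```

The price of O(1) time is a quadratic processor count, a classic time/hardware trade-off in PRAM algorithm design.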
GLOBAL VERSUS DISTRIBUTED
MEMORY
Within the MIMD class of parallel processors, memory can
be global or distributed.
Global memory may be visualized as being in a central
location where all processors
can access it with equal ease (or with equal difficulty).
SIMD VERSUS MIMD
ARCHITECTURES
Most early parallel machines had SIMD designs. The
ILLIAC IV computer, briefly
mentioned in Section 1.3, and described in more detail in
Section 23.2, is a well-known
example of such early parallel machines. SIMD implies
that all processors execute the same instruction at any given time, each applying it to its own data.
MODELS OF PARALLEL PROCESSING
Associative processing (AP) was perhaps the earliest form
of parallel processing.
Associative or content-addressable memories (AMs, CAMs) allow memory cells to
be accessed based on their contents rather than their physical locations.
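A toy sketch of content-addressable lookup (an assumed illustration; the function and mask convention are not from the text): every stored word is compared against the search key "at once", and all matching locations respond, so access is by content rather than by address.

```python
# CAM-style masked search: bits set in `mask` are compared against `key`,
# bits cleared in `mask` are "don't care". In real hardware all words are
# compared in parallel; the comprehension below simulates that sequentially.

def cam_search(words, key, mask):
    """Return indices of all stored words matching key under mask."""
    return [i for i, w in enumerate(words)      # conceptually in parallel
            if (w & mask) == (key & mask)]

memory = [0b1010, 0b1110, 0b0011, 0b1011]
print(cam_search(memory, 0b1010, 0b1100))  # -> [0, 3] (high bits are 10)
```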