ALGORITHMS WITH SHARED VARIABLES
Again, in this section, we focus on developing simple algorithms that are not necessarily very efficient. Shared-memory architectures and their algorithms will be discussed in more depth later.
COMPLEXITY CLASSES
Complexity theory is a branch of computer science that deals with the ease or difficulty of solving various computational problems of interest. In complexity theory, problems are divided into classes according to the difficulty of solving them.
ALGORITHM OPTIMALITY AND EFFICIENCY
One way in which we use the big-oh and big-omega notations, introduced in Section 3.1, is as follows. Suppose that we have constructed a valid algorithm to solve a given problem of interest.
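A standard illustration of how matching upper and lower bounds establish optimality (comparison-based sorting is assumed here as the example problem; it is not taken from the passage above):

```latex
% Upper bound: a valid algorithm (mergesort) runs within
T_{\text{mergesort}}(n) = O(n \log n)
% Lower bound: any comparison-based sort requires
T_{\text{sort}}(n) = \Omega(n \log n)
% The bounds match, so mergesort is asymptotically optimal:
T_{\text{sort}}(n) = \Theta(n \log n)
```

In general, big-oh bounds what our algorithm costs, big-omega bounds what the problem demands, and when the two agree the algorithm cannot be improved by more than a constant factor.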
ASYMPTOTIC COMPLEXITY
Algorithms can be analyzed in two ways: precise and approximate. In precise analysis, we typically count the number of operations of various types (e.g., arithmetic operations, memory accesses) performed by the algorithm.
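The distinction can be sketched in code; the selection-sort example below is illustrative and not drawn from the text. Its exact comparison count, n(n-1)/2, is what a precise analysis yields, while Theta(n^2) is the corresponding asymptotic statement.

```python
# Precise vs. asymptotic analysis, sketched on a hypothetical example:
# count the element comparisons performed by selection sort exactly.

def selection_sort_comparisons(a):
    """Sort a in place and return the exact number of element comparisons."""
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            comparisons += 1          # one comparison per inner-loop step
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]       # place the minimum of a[i:] at position i
    return comparisons

n = 100
count = selection_sort_comparisons(list(range(n, 0, -1)))
print(count)                          # precise count: n*(n-1)/2 = 4950
```

The precise count 4950 is independent of the input ordering; the approximate analysis simply records that the count grows as Theta(n^2).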
PARALLELIZABLE TASKS AND THE NC CLASS
In 1979, Nicholas Pippenger [Pipp79] suggested that efficiently parallelizable problems in P might be defined as those problems that can be solved in a time period that is at most polylogarithmic in the problem size, using a polynomial number of processors.
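Stated symbolically, with arbitrary constants k and c, the usual definition of the class NC reads:

```latex
% NC: problems solvable in polylogarithmic time with
% polynomially many processors, for problem size n
T(n) = O(\log^{k} n) \quad \text{using} \quad p(n) = O(n^{c}) \ \text{processors}
```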
PARALLEL PROGRAMMING PARADIGMS
Several methods are used extensively in devising efficient parallel algorithms for solving problems of interest. A brief review of these methods (divide and conquer, randomization, and others) follows.
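As a minimal sketch of the divide-and-conquer paradigm (the function name and example data are illustrative, not from the text): the problem is split in half, each half is solved recursively, and the sub-results are combined. On a parallel machine, the two recursive calls could proceed on different processors.

```python
# Divide and conquer, sketched on the problem of finding a maximum.

def dc_max(a, lo, hi):
    """Return the maximum of a[lo:hi] by divide and conquer (requires hi > lo)."""
    if hi - lo == 1:
        return a[lo]                    # base case: a single element
    mid = (lo + hi) // 2
    left = dc_max(a, lo, mid)           # solve the left half
    right = dc_max(a, mid, hi)          # solve the right half (parallelizable)
    return max(left, right)             # combine the sub-results

print(dc_max([7, 2, 9, 4, 1], 0, 5))    # 9
```

Because the two subproblems are independent, the recursion tree has depth O(log n), which is the running time if each level is executed in parallel.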
THE PRAM SHARED-MEMORY MODEL
The theoretical model used for conventional or sequential computers (SISD class) is known as the random-access machine (RAM), not to be confused with random-access memory, which shares the same acronym.
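A brief, hypothetical sketch of how an algorithm is typically expressed in a shared-memory model of this kind: p processors execute the same step synchronously on a shared array. Below, each synchronous parallel step is simulated by a sequential loop over processor indices; the function name and data are illustrative.

```python
# Parallel sum by recursive doubling: log2(n) synchronous steps,
# each step performed "simultaneously" by all active processors.

def pram_sum(shared):
    """Sum of shared[0:n], n a power of 2, in log2(n) parallel steps."""
    s = list(shared)                    # local copy standing in for shared memory
    n = len(s)
    stride = 1
    while stride < n:
        # one parallel step: processor i adds its partner's partial sum
        for i in range(0, n, 2 * stride):
            s[i] += s[i + stride]
        stride *= 2                     # partners are twice as far apart next step
    return s[0]

print(pram_sum([1, 2, 3, 4, 5, 6, 7, 8]))   # 36
```

With one processor per pair, the outer loop's log2(n) iterations correspond to O(log n) parallel time, versus O(n) time on a sequential RAM.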
GLOBAL VERSUS DISTRIBUTED MEMORY
Within the MIMD class of parallel processors, memory can be global or distributed. Global memory may be visualized as residing in a central location where all processors can access it.
SIMD VERSUS MIMD ARCHITECTURES
Most early parallel machines had SIMD designs. The ILLIAC IV computer, briefly mentioned in Section 1.3 and described in more detail in Section 23.2, is a well-known example.
MODELS OF PARALLEL PROCESSING
Associative processing (AP) was perhaps the earliest form of parallel processing. Associative or content-addressable memories (AMs, CAMs), which allow memory cells to be accessed by their contents rather than by address, form the basis of this approach.