Chapter 13
Randomized Algorithms
The idea that a process can be “random” is not a modern one; we can trace
the notion far back into the history of human thought and certainly see its
reflections in gambling and the insurance business, each of which reaches into
ancient times. Yet, while similarly intuitive subjects like geometry and logic
have been treated mathematically for several thousand years, the mathematical
study of probability is surprisingly young; the first known attempts to seriously
formalize it came about in the 1600s. Of course, the history of computer science
plays out on a much shorter time scale, and the idea of randomization has been
with it since its early days.
Randomization and probabilistic analysis are themes that cut across many
areas of computer science, including algorithm design, and when one thinks
about random processes in the context of computation, it is usually in one of
two distinct ways. One view is to consider the world as behaving randomly:
One can consider traditional algorithms that confront randomly generated
input. This approach is often termed average-case analysis, since we are
studying the behavior of an algorithm on an “average” input (subject to some
underlying random process), rather than a worst-case input.
A second view is to consider algorithms that behave randomly: The world
provides the same worst-case input as always, but we allow our algorithm to
make random decisions as it processes the input. Thus the role of randomization
in this approach is purely internal to the algorithm and does not require
new assumptions about the nature of the input. It is this notion of a randomized
algorithm that we will be considering in this chapter.
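As a concrete sketch of this idea (an illustration, not an algorithm from the text): quicksort with a uniformly random pivot makes its random decisions internally, so that no single fixed input is consistently bad for it.

```python
import random

def randomized_quicksort(arr):
    """Sort a list using quicksort with a uniformly random pivot.

    The randomness is internal to the algorithm: for any fixed input,
    the expected number of comparisons is O(n log n), so an adversary
    choosing the input cannot force the bad case on every run.
    """
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)  # the algorithm's internal random decision
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```

Note that no assumption is made about how the input was generated; only the pivot choice is random.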
Why might it be useful to design an algorithm that is allowed to make
random decisions? A first answer would be to observe that by allowing
randomization, we’ve made our underlying model more powerful. Efficient
deterministic algorithms that always yield the correct answer are a special case
of efficient randomized algorithms that only need to yield the correct answer
with high probability; they are also a special case of randomized algorithms
that are always correct, and run efficiently in expectation. Even in a worst-case
world, an algorithm that does its own “internal” randomization may be
able to offset certain worst-case phenomena. So problems that may not have
been solvable by efficient deterministic algorithms may still be amenable to
randomized algorithms.
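A standard example of an algorithm that is only correct with high probability (again an illustration, not drawn from this chapter) is Freivalds’ check for verifying a matrix product: it runs faster than recomputing the product, and its error probability can be driven down by repetition.

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify that A @ B == C for n x n matrices
    given as lists of lists of integers.

    Each trial picks a random 0/1 vector r and compares A(Br) with Cr,
    which costs O(n^2) instead of the O(n^3) of naive recomputation.
    If A @ B != C, one trial detects the mismatch with probability
    at least 1/2, so `trials` repetitions err with probability
    at most 2**-trials.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # certainly wrong: a witness r was found
    return True  # correct with probability at least 1 - 2**-trials
```

A "False" answer here is always certain, while a "True" answer is only probably correct; repeating the trial shrinks the chance of a false "True" exponentially.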
But this is not the whole story, and in fact we’ll be looking at randomized
algorithms for a number of problems where there exist comparably efficient
deterministic algorithms. Even in such situations, a randomized approach often
exhibits considerable power for further reasons: It may be conceptually much
simpler; or it may allow the algorithm to function while maintaining very little
internal state or memory of the past. The advantages of randomization seem