3 Growth of Functions

The order of growth of the running time of an algorithm, defined in Chapter 2, gives a simple characterization of the algorithm's efficiency and also allows us to compare the relative performance of alternative algorithms. Once the input size n becomes large enough, merge sort, with its Θ(n lg n) worst-case running time, beats insertion sort, whose worst-case running time is Θ(n²). Although we can sometimes determine the exact running time of an algorithm, as we did for insertion sort in Chapter 2, the extra precision is not usually worth the effort of computing it. For large enough inputs, the multiplicative constants and lower-order terms of an exact running time are dominated by the effects of the input size itself.

When we look at input sizes large enough to make only the order of growth of the running time relevant, we are studying the asymptotic efficiency of algorithms. That is, we are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound. Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs.

This chapter gives several standard methods for simplifying the asymptotic analysis of algorithms. The next section begins by defining several types of "asymptotic notation," of which we have already seen an example in Θ-notation. Several notational conventions used throughout this book are then presented, and finally we review the behavior of functions that commonly arise in the analysis of algorithms.

3.1 Asymptotic notation

The notations we use to describe the asymptotic running time of an algorithm
are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, ...}. Such notations are convenient for describing the worst-case running-time function T(n), which is usually defined only on integer input sizes. It is sometimes convenient, however, to abuse asymptotic notation in a variety of ways. For example, the notation is easily extended to the domain of real numbers or, alternatively, restricted to a subset of the natural numbers. It is important, however, to understand the precise meaning of the notation so that when it is abused, it is not misused. This section defines the basic asymptotic notations and also introduces some common abuses.

Θ-notation

In Chapter 2, we found that the worst-case running time of insertion sort is
T(n) = Θ(n²). Let us define what this notation means. For a given function g(n), we denote by Θ(g(n)) the set of functions

    Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that
                0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀ } .¹

A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c₁ and c₂ such that it can be "sandwiched" between c₁g(n) and c₂g(n), for sufficiently large n. Because Θ(g(n)) is a set, we could write "f(n) ∈ Θ(g(n))" to indicate that f(n) is a member of Θ(g(n)). Instead, we will usually write "f(n) = Θ(g(n))" to express the same notion. This abuse of equality to denote set membership may at first appear confusing, but we shall see later in this section that it has advantages.

Figure 3.1(a) gives an intuitive picture of functions f(n) and g(n), where we
have that f(n) = Θ(g(n)). For all values of n to the right of n₀, the value of f(n) lies at or above c₁g(n) and at or below c₂g(n). In other words, for all n ≥ n₀, the function f(n) is equal to g(n) to within a constant factor. We say that g(n) is an asymptotically tight bound for f(n).

The definition of Θ(g(n)) requires that every member f(n) ∈ Θ(g(n)) be asymptotically nonnegative, that is, that f(n) be nonnegative whenever n is sufficiently large. (An asymptotically positive function is one that is positive for all sufficiently large n.) Consequently, the function g(n) itself must be asymptotically nonnegative, or else the set Θ(g(n)) is empty. We shall therefore assume that every function used within Θ-notation is asymptotically nonnegative. This assumption holds for the other asymptotic notations defined in this chapter as well.

In Chapter 2, we introduced an informal notion of Θ-notation that amounted
to throwing away lower-order terms and ignoring the leading coefficient of the highest-order term. Let us briefly justify this intuition by using the formal definition to show that (1/2)n² − 3n = Θ(n²). To do so, we must determine positive constants c₁, c₂, and n₀ such that

    c₁n² ≤ (1/2)n² − 3n ≤ c₂n²

for all n ≥ n₀. Dividing by n² yields

    c₁ ≤ 1/2 − 3/n ≤ c₂ .

The right-hand inequality can be made to hold for any value of n ≥ 1 by choosing c₂ ≥ 1/2. Likewise, the left-hand inequality can be made to hold for any value of n ≥ 7 by choosing c₁ ≤ 1/14. Thus, by choosing c₁ = 1/14, c₂ = 1/2, and n₀ = 7, we can verify that (1/2)n² − 3n = Θ(n²).

¹ Within set notation, a colon should be read as "such that."

[Figure 3.1  Graphic examples of the Θ, O, and Ω notations. In each part, the value of n₀ shown is the minimum possible value; any greater value would also work. (a) Θ-notation bounds a function to within constant factors. We write f(n) = Θ(g(n)) if there exist positive constants n₀, c₁, and c₂ such that to the right of n₀, the value of f(n) always lies between c₁g(n) and c₂g(n) inclusive. (b) O-notation gives an upper bound for a function to within a constant factor. We write f(n) = O(g(n)) if there are positive constants n₀ and c such that to the right of n₀, the value of f(n) always lies on or below cg(n). (c) Ω-notation gives a lower bound for a function to within a constant factor. We write f(n) = Ω(g(n)) if there are positive constants n₀ and c such that to the right of n₀, the value of f(n) always lies on or above cg(n).]

Certainly, other choices for the
constants exist, but the important thing is that some choice exists. Note that these constants depend on the function (1/2)n² − 3n; a different function belonging to Θ(n²) would usually require different constants.

We can also use the formal definition to verify that 6n³ ≠ Θ(n²). Suppose for the purpose of contradiction that c₂ and n₀ exist such that 6n³ ≤ c₂n² for all n ≥ n₀. But then n ≤ c₂/6, which cannot possibly hold for arbitrarily large n, since c₂ is constant.

Intuitively, the lower-order terms of an asymptotically positive function can be
ignored in determining asymptotically tight bounds because they are insignificant for large n. A tiny fraction of the highest-order term is enough to dominate the lower-order terms. Thus, setting c₁ to a value that is slightly smaller than the coefficient of the highest-order term and setting c₂ to a value that is slightly larger permits the inequalities in the definition of Θ-notation to be satisfied. The coefficient of the highest-order term can likewise be ignored, since it only changes c₁ and c₂ by a constant factor equal to the coefficient.

As an example, consider any quadratic function f(n) = an² + bn + c, where
a, b, and c are constants and a > 0. Throwing away the lower-order terms and ignoring the constant yields f(n) = Θ(n²). Formally, to show the same thing, we take the constants c₁ = a/4, c₂ = 7a/4, and n₀ = 2·max(|b|/a, √(|c|/a)). The reader may verify that 0 ≤ c₁n² ≤ an² + bn + c ≤ c₂n² for all n ≥ n₀. In general, for any polynomial p(n) = Σᵢ₌₀ᵈ aᵢnⁱ, where the aᵢ are constants and a_d > 0, we have p(n) = Θ(nᵈ) (see Problem 3-1).

Since any constant is a degree-0 polynomial, we can express any constant function as Θ(n⁰), or Θ(1). This latter notation is a minor abuse, however, because it is not clear what variable is tending to infinity.² We shall often use the notation Θ(1) to mean either a constant or a constant function with respect to some variable.

O-notation

The Θ-notation asymptotically bounds a function from above and below. When
we have only an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes just "oh of g of n") the set of functions

    O(g(n)) = { f(n) : there exist positive constants c and n₀ such that
                0 ≤ f(n) ≤ cg(n) for all n ≥ n₀ } .

We use O-notation to give an upper bound on a function, to within a constant factor. Figure 3.1(b) shows the intuition behind O-notation. For all values n to the right of n₀, the value of the function f(n) is on or below cg(n).

We write f(n) = O(g(n)) to indicate that a function f(n) is a member of the set O(g(n)). Note that f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a stronger notion than O-notation. Written set-theoretically, we have Θ(g(n)) ⊆ O(g(n)). Thus, our proof that any quadratic function an² + bn + c, where a > 0, is in Θ(n²) also shows that any quadratic function is in O(n²). What may be more surprising is that any linear function an + b is in O(n²), which is easily verified by taking c = a + |b| and n₀ = 1.

² The real problem is that our ordinary notation for functions does not distinguish functions from values. In λ-calculus, the parameters to a function are clearly specified: the function n² could be written as λn.n², or even λr.r². Adopting a more rigorous notation, however, would complicate algebraic manipulations, and so we choose to tolerate the abuse.

Some readers who have seen O-notation before may find it strange that we
should write, for example, n = O(n²). In the literature, O-notation is sometimes used informally to describe asymptotically tight bounds, that is, what we have defined using Θ-notation. In this book, however, when we write f(n) = O(g(n)), we are merely claiming that some constant multiple of g(n) is an asymptotic upper bound on f(n), with no claim about how tight an upper bound it is. Distinguishing asymptotic upper bounds from asymptotically tight bounds has now become standard in the algorithms literature.
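These definitions lend themselves to finite spot checks. The sketch below is our own illustration, not from the text; the cutoff n_max is an arbitrary choice, and passing the check is only evidence for a witness, since the definition quantifies over all n ≥ n₀. It tests the constants c = a + |b|, n₀ = 1 claimed above for a linear function, and shows that O-notation makes no tightness claim:

```python
# Finite spot check of O-notation witnesses (illustration only, not a proof:
# the definition requires the inequality for ALL n >= n0; we sample to n_max).

def certifies_upper_bound(f, g, c, n0, n_max=10_000):
    """True if 0 <= f(n) <= c*g(n) for every integer n in [n0, n_max]."""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))

a, b = 3, 4                            # the linear function f(n) = 3n + 4
f = lambda n: a * n + b

# The text's witnesses c = a + |b| and n0 = 1 certify 3n + 4 = O(n^2).
assert certifies_upper_bound(f, lambda n: n * n, c=a + abs(b), n0=1)

# O makes no tightness claim: the same f is also bounded by c * n^3.
assert certifies_upper_bound(f, lambda n: n ** 3, c=a + abs(b), n0=1)

# But no fixed constant makes n^2 = O(n): the ratio n^2/n grows without bound.
assert not certifies_upper_bound(lambda n: n * n, lambda n: n, c=100, n0=1)
```

Failing the check for a particular c rules out only that witness; a proof must argue for all n ≥ n₀, as in the derivations above.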
Using O-notation, we can often describe the running time of an algorithm merely by inspecting the algorithm's overall structure. For example, the doubly nested loop structure of the insertion sort algorithm from Chapter 2 immediately yields an O(n²) upper bound on the worst-case running time: the cost of each iteration of the inner loop is bounded from above by O(1) (constant), the indices i and j are both at most n, and the inner loop is executed at most once for each of the n² pairs of values for i and j.

Since O-notation describes an upper bound, when we use it to bound the worst-case running time of an algorithm, we have a bound on the running time of the algorithm on every input. Thus, the O(n²) bound on worst-case running time of insertion sort also applies to its running time on every input. The Θ(n²) bound on the worst-case running time of insertion sort, however, does not imply a Θ(n²) bound on the running time of insertion sort on every input. For example, we saw in Chapter 2 that when the input is already sorted, insertion sort runs in Θ(n) time.

Technically, it is an abuse to say that the running time of insertion sort is O(n²),
since for a given n, the actual running time varies, depending on the particular input of size n. When we say "the running time is O(n²)," we mean that there is a function f(n) that is O(n²) such that for any value of n, no matter what particular input of size n is chosen, the running time on that input is bounded from above by the value f(n). Equivalently, we mean that the worst-case running time is O(n²).

Ω-notation

Just as O-notation provides an asymptotic upper bound on a function, Ω-notation
provides an asymptotic lower bound. For a given function g(n), we denote by Ω(g(n)) (pronounced "big-omega of g of n" or sometimes just "omega of g of n") the set of functions

    Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that
                0 ≤ cg(n) ≤ f(n) for all n ≥ n₀ } .

The intuition behind Ω-notation is shown in Figure 3.1(c). For all values n to the right of n₀, the value of f(n) is on or above cg(n).

From the definitions of the asymptotic notations we have seen thus far, it is easy to prove the following important theorem (see Exercise 3.1-5).

Theorem 3.1
For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if
f(n) = O(g(n)) and f(n) = Ω(g(n)). ∎

As an example of the application of this theorem, our proof that an² + bn + c = Θ(n²) for any constants a, b, and c, where a > 0, immediately implies that an² + bn + c = Ω(n²) and an² + bn + c = O(n²). In practice, rather than using Theorem 3.1 to obtain asymptotic upper and lower bounds from asymptotically tight bounds, as we did for this example, we usually use it to prove asymptotically tight bounds from asymptotic upper and lower bounds.

Since Ω-notation describes a lower bound, when we use it to bound the best-case
running time of an algorithm, by implication we also bound the running time of the algorithm on arbitrary inputs as well. For example, the best-case running time of insertion sort is Ω(n), which implies that the running time of insertion sort is Ω(n).

The running time of insertion sort therefore falls between Ω(n) and O(n²), since it falls anywhere between a linear function of n and a quadratic function of n. Moreover, these bounds are asymptotically as tight as possible: for instance, the running time of insertion sort is not Ω(n²), since there exists an input for which insertion sort runs in Θ(n) time (e.g., when the input is already sorted). It is not contradictory, however, to say that the worst-case running time of insertion sort is Ω(n²), since there exists an input that causes the algorithm to take Ω(n²) time.
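The gap between the Θ(n²) worst case and the Θ(n) sorted-input case can be observed by instrumenting insertion sort. This sketch is our own (the book's pseudocode appears in Chapter 2); the input size n = 200 is arbitrary, and the counter tallies only inner-loop iterations, so the outer loop's linear overhead is not included in it:

```python
# Count the inner-loop work of insertion sort on different inputs
# (our own instrumentation of the Chapter 2 algorithm).

def insertion_sort_steps(a):
    """Sort a copy of `a`; return the number of inner-loop iterations."""
    a = list(a)
    steps = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:   # each iteration costs O(1)
            a[i + 1] = a[i]
            i -= 1
            steps += 1
        a[i + 1] = key
    return steps

n = 200
best = insertion_sort_steps(range(n))            # already sorted input
worst = insertion_sort_steps(range(n, 0, -1))    # reverse-sorted input

assert best == 0                    # inner loop never iterates on sorted input
assert worst == n * (n - 1) // 2    # every pair i < j causes one shift
assert worst <= n * n               # consistent with the O(n^2) upper bound
```

The reversed input exhibits the Ω(n²)-time input that makes the worst-case Ω(n²) claim true, while the sorted input shows why "running time is Ω(n²)" with no modifier would be false.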
When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.

Asymptotic notation in equations and inequalities

We have already seen how asymptotic notation can be used within mathematical
formulas. For example, in introducing O-notation, we wrote "n = O(n²)." We might also write 2n² + 3n + 1 = 2n² + Θ(n). How do we interpret such formulas?

When the asymptotic notation stands alone on the right-hand side of an equation (or inequality), as in n = O(n²), we have already defined the equal sign to mean set membership: n ∈ O(n²). In general, however, when asymptotic notation appears in a formula, we interpret it as standing for some anonymous function that we do not care to name. For example, the formula 2n² + 3n + 1 = 2n² + Θ(n) means that 2n² + 3n + 1 = 2n² + f(n), where f(n) is some function in the set Θ(n). In this case, f(n) = 3n + 1, which indeed is in Θ(n).

Using asymptotic notation in this manner can help eliminate inessential detail
and clutter in an equation. For example, in Chapter 2 we expressed the worst-case running time of merge sort as the recurrence

    T(n) = 2T(n/2) + Θ(n) .

If we are interested only in the asymptotic behavior of T(n), there is no point in specifying all the lower-order terms exactly; they are all understood to be included in the anonymous function denoted by the term Θ(n).

The number of anonymous functions in an expression is understood to be equal to the number of times the asymptotic notation appears. For example, in the expression

    Σᵢ₌₁ⁿ O(i) ,

there is only a single anonymous function (a function of i). This expression is thus not the same as O(1) + O(2) + ⋯ + O(n), which doesn't really have a clean interpretation.

In some cases, asymptotic notation appears on the left-hand side of an equation, as in
In some cases. asymptotic notation appears on the lefthand side of an equation, as in
2n2 + em) = @012) . We interpret such equations using the following rule: No matter how the anony
mous functions are chosen on the left of the equal sign, there is a way to choose
the anonymous functions on the right of the equal sign to make the equation valid.
Thus, the meaning of our example is that for any function f (n) e @(n), there
is some function g(n) 6 @(n2) such that 2n2 + f (n) = g(n) for all n. In other
words, the righthand side of an equation provides a coarser level of detail than the lefthand side.
A number of such relationships can be chained together, as in an + 3n +1 = 2n2 +®(n)
                 = Θ(n²) .

We can interpret each equation separately by the rule above. The first equation says that there is some function f(n) ∈ Θ(n) such that 2n² + 3n + 1 = 2n² + f(n) for all n. The second equation says that for any function g(n) ∈ Θ(n) (such as the f(n) just mentioned), there is some function h(n) ∈ Θ(n²) such that 2n² + g(n) = h(n) for all n. Note that this interpretation implies that 2n² + 3n + 1 = Θ(n²), which is what the chaining of equations intuitively gives us.

o-notation

The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n² = O(n²) is asymptotically tight, but the bound
2n = O(n²) is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define o(g(n)) ("little-oh of g of n") as the set

    o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant
                n₀ > 0 such that 0 ≤ f(n) < cg(n) for all n ≥ n₀ } .

For example, 2n = o(n²), but 2n² ≠ o(n²).

The definitions of O-notation and o-notation are similar. The main difference is
that in f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ cg(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound 0 ≤ f(n) < cg(n) holds for all constants c > 0. Intuitively, in the o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is,

    lim_{n→∞} f(n)/g(n) = 0 .                                            (3.1)

Some authors use this limit as a definition of the o-notation; the definition in this book also restricts the anonymous functions to be asymptotically nonnegative.

ω-notation

By analogy, ω-notation is to Ω-notation as o-notation is to O-notation. We use
ω-notation to denote a lower bound that is not asymptotically tight. One way to define it is by

    f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)) .

Formally, however, we define ω(g(n)) ("little-omega of g of n") as the set

    ω(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant
                n₀ > 0 such that 0 ≤ cg(n) < f(n) for all n ≥ n₀ } .

For example, n²/2 = ω(n), but n²/2 ≠ ω(n²). The relation f(n) = ω(g(n)) implies that

    lim_{n→∞} f(n)/g(n) = ∞ ,

if the limit exists. That is, f(n) becomes arbitrarily large relative to g(n) as n approaches infinity.
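The limit characterizations of o and ω can be illustrated numerically. This sketch is our own; the sample point 10⁶ and the thresholds are arbitrary, and evaluating a ratio at one large n is intuition-building only, not a proof of the limit:

```python
# Numeric illustration of the limit view of o- and omega-notation
# (finite samples at an arbitrary point, not a proof of any limit).

def ratio(f, g, n):
    return f(n) / g(n)

g = lambda n: n * n

# 2n = o(n^2): the ratio 2n / n^2 heads to 0.
assert ratio(lambda n: 2 * n, g, 10**6) < 1e-5

# 2n^2 is O(n^2) but NOT o(n^2): the ratio is stuck at the constant 2.
assert ratio(lambda n: 2 * n * n, g, 10**6) == 2.0

# n^2/2 = omega(n): the ratio (n^2/2) / n = n/2 grows without bound.
assert ratio(lambda n: n * n / 2, lambda n: n, 10**6) > 1e5
```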
Comparison of functions

Many of the relational properties of real numbers apply to asymptotic comparisons as well. For the following, assume that f(n) and g(n) are asymptotically positive.

Transitivity:
    f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n)) ,
    f(n) = O(g(n)) and g(n) = O(h(n)) imply f(n) = O(h(n)) ,
    f(n) = Ω(g(n)) and g(n) = Ω(h(n)) imply f(n) = Ω(h(n)) ,
    f(n) = o(g(n)) and g(n) = o(h(n)) imply f(n) = o(h(n)) ,
    f(n) = ω(g(n)) and g(n) = ω(h(n)) imply f(n) = ω(h(n)) .

Reflexivity:
    f(n) = Θ(f(n)) ,
    f(n) = O(f(n)) ,
    f(n) = Ω(f(n)) .

Symmetry:
    f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)) .

Transpose symmetry:
    f(n) = O(g(n)) if and only if g(n) = Ω(f(n)) ,
    f(n) = o(g(n)) if and only if g(n) = ω(f(n)) .
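Transitivity for O-notation is constructive: witnesses (c₁, n₁) for f = O(g) and (c₂, n₂) for g = O(h) combine into the witness (c₁c₂, max(n₁, n₂)) for f = O(h), since f(n) ≤ c₁g(n) ≤ c₁c₂h(n) once n ≥ max(n₁, n₂). A sketch (our own; the example functions and witnesses are arbitrary choices):

```python
# Constructive transitivity for O-notation (our own illustration).
# From f(n) <= c1*g(n) for n >= n1 and g(n) <= c2*h(n) for n >= n2,
# it follows that f(n) <= (c1*c2)*h(n) for n >= max(n1, n2).

def compose_witness(w_fg, w_gh):
    """Combine witnesses (c, n0) for f = O(g) and g = O(h) into one for f = O(h)."""
    (c1, n1), (c2, n2) = w_fg, w_gh
    return (c1 * c2, max(n1, n2))

def holds(f, g, c, n0, n_max=5_000):
    """Finite spot check that 0 <= f(n) <= c*g(n) on [n0, n_max] (not a proof)."""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))

f = lambda n: 3 * n + 4
g = lambda n: n
h = lambda n: n * n

w_fg = (5, 2)                  # 3n + 4 <= 5n   for all n >= 2
w_gh = (1, 1)                  # n      <= n^2  for all n >= 1
assert holds(f, g, *w_fg) and holds(g, h, *w_gh)

c, n0 = compose_witness(w_fg, w_gh)    # combined witness (5, 2)
assert holds(f, h, c, n0)              # f = O(h), as transitivity promises
```

The same witness-combining pattern underlies the transitivity proofs asked for in the exercises for Ω, o, and ω.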
Because these properties hold for asymptotic notations, one can draw an analogy between the asymptotic comparison of two functions f and g and the comparison of two real numbers a and b:

    f(n) = O(g(n)) ≈ a ≤ b ,
    f(n) = Ω(g(n)) ≈ a ≥ b ,
    f(n) = Θ(g(n)) ≈ a = b ,
    f(n) = o(g(n)) ≈ a < b ,
    f(n) = ω(g(n)) ≈ a > b .

We say that f(n) is asymptotically smaller than g(n) if f(n) = o(g(n)), and f(n) is asymptotically larger than g(n) if f(n) = ω(g(n)).

One property of real numbers, however, does not carry over to asymptotic notation:

Trichotomy: For any two real numbers a and b, exactly one of the following must
hold: a < b, a = b, or a > b.

Although any two real numbers can be compared, not all functions are asymptotically comparable. That is, for two functions f(n) and g(n), it may be the case that neither f(n) = O(g(n)) nor f(n) = Ω(g(n)) holds. For example, the functions n and n^(1+sin n) cannot be compared using asymptotic notation, since the value of the exponent in n^(1+sin n) oscillates between 0 and 2, taking on all values in between.

Exercises

3.1-1
Let f(n) and g(n) be asymptotically nonnegative functions. Using the basic definition of Θ-notation, prove that max(f(n), g(n)) = Θ(f(n) + g(n)).

3.1-2
Show that for any real constants a and b, where b > 0,

    (n + a)ᵇ = Θ(nᵇ) .                                                   (3.2)

3.1-3
Explain why the statement, "The running time of algorithm A is at least O(n²)," is meaningless.

3.1-4
Is 2ⁿ⁺¹ = O(2ⁿ)? Is 2²ⁿ = O(2ⁿ)?

3.1-5
Prove Theorem 3.1.

3.1-6
Prove that the running time of an algorithm is Θ(g(n)) if and only if its worst-case running time is O(g(n)) and its best-case running time is Ω(g(n)).

3.1-7
Prove that o(g(n)) ∩ ω(g(n)) is the empty set.

3.1-8
We can extend our notation to the case of two parameters n and m that can go to infinity independently at different rates. For a given function g(n, m), we denote by O(g(n, m)) the set of functions

    O(g(n, m)) = { f(n, m) : there exist positive constants c, n₀, and m₀
                   such that 0 ≤ f(n, m) ≤ cg(n, m)
                   for all n ≥ n₀ and m ≥ m₀ } .

Give corresponding definitions for Ω(g(n, m)) and Θ(g(n, m)).
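Two-parameter definitions like the one in Exercise 3.1-8 can also be spot-checked over a finite grid while building intuition. This sketch is our own (the example functions, constants, and grid bounds are arbitrary), and a finite check is of course not a proof:

```python
# Finite sanity check of the two-parameter O-notation of Exercise 3.1-8
# (our own illustration; samples a grid, does not prove the bound).

def holds_2d(f, g, c, n0, m0, n_max=200, m_max=200):
    """True if 0 <= f(n, m) <= c*g(n, m) for all n in [n0, n_max], m in [m0, m_max]."""
    return all(0 <= f(n, m) <= c * g(n, m)
               for n in range(n0, n_max + 1)
               for m in range(m0, m_max + 1))

f = lambda n, m: 3 * n + 2 * m          # f(n, m) = 3n + 2m
g = lambda n, m: n + m                  # g(n, m) = n + m

# Witness c = 3, n0 = m0 = 1 works, since 3n + 2m <= 3(n + m).
assert holds_2d(f, g, c=3, n0=1, m0=1)

# c = 1 is too small: already f(1, 1) = 5 > 1 * g(1, 1) = 2.
assert not holds_2d(f, g, c=1, n0=1, m0=1)
```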