Lecture Notes
CMSC 251
Lecture 5: Asymptotics
(Tuesday, Feb 10, 1998)
Read:
Chapt. 3 in CLR. The Limit Rule is not really covered in the text. Read Chapt. 4 for next time.
Asymptotics: We have introduced Θ() notation, and last time we gave a formal definition. Today, we will explore this and other asymptotic notations in greater depth, and hopefully give a better understanding of what they mean.

Θ-Notation: Recall the following definition from last time.
Definition: Given any function g(n), we define Θ(g(n)) to be a set of functions:

    Θ(g(n)) = { f(n) | there exist strictly positive constants c1, c2, and n0
                such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }.
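Read as code, the definition says f(n) ∈ Θ(g(n)) exactly when some witness constants c1, c2, n0 make the sandwich inequality hold from n0 onward. The sketch below (a hypothetical helper, not part of the notes) tests candidate witnesses over a finite sample of n; passing is only evidence, since sampling can refute a choice of witnesses but never prove membership.

```python
def in_theta_sample(f, g, c1, c2, n0, n_max=10_000):
    """Check 0 <= c1*g(n) <= f(n) <= c2*g(n) for every sampled n in [n0, n_max).

    A failure shows these particular witnesses c1, c2, n0 do not work;
    a pass is evidence, not a proof, of Theta-membership.
    """
    return all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, n_max))

# The running example of the notes, with witnesses c1 = 7, c2 = 10, n0 = 2:
print(in_theta_sample(lambda n: 8 * n**2 + 2 * n - 3, lambda n: n**2, 7, 10, 2))
# prints True

# n^3 is not in Theta(n^2): no c2 works, and these witnesses fail.
print(in_theta_sample(lambda n: n**3, lambda n: n**2, 1, 100, 1))
# prints False
```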
Let’s dissect this definition. Intuitively, what we want to say with “f(n) ∈ Θ(g(n))” is that f(n) and g(n) are asymptotically equivalent. This means that they have essentially the same growth rates for large n. For example, functions like 4n², (8n² + 2n − 3), (n²/5 + √n − 10 log n), and n(n − 3) are all intuitively asymptotically equivalent, since as n becomes large, the dominant (fastest growing) term is some constant times n². In other words, they all grow quadratically in n. The portion of the definition that allows us to select c1 and c2 is essentially saying “the constants do not matter, because you may pick c1 and c2 however you like to satisfy these conditions.” The portion of the definition that allows us to select n0 is essentially saying “we are only interested in large n, since you only have to satisfy the condition for all n bigger than n0, and you may make n0 as big a constant as you like.”
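One informal way to see this shared quadratic growth is to compute f(n)/n² for each example at increasing n: the ratio settles near the coefficient of the dominant term. A small illustration (not from the notes; natural log is assumed for the log term, which does not affect the limit):

```python
import math

# The four example functions from the text, all intuitively Theta(n^2).
examples = {
    "4n^2":                      lambda n: 4 * n**2,
    "8n^2 + 2n - 3":             lambda n: 8 * n**2 + 2 * n - 3,
    "n^2/5 + sqrt(n) - 10 ln n": lambda n: n**2 / 5 + math.sqrt(n) - 10 * math.log(n),
    "n(n - 3)":                  lambda n: n * (n - 3),
}

# As n grows, f(n) / n^2 approaches the constant in front of the n^2 term.
for name, f in examples.items():
    print(name, [round(f(n) / n**2, 4) for n in (10, 1_000, 100_000)])
```

For instance, the ratio for 8n² + 2n − 3 drifts toward 8, and for n(n − 3) toward 1; the lower-order terms become negligible.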
An example: Consider the function f(n) = 8n² + 2n − 3. Our informal rule of keeping the largest term and throwing away the constants suggests that f(n) ∈ Θ(n²) (since f grows quadratically). Let’s see why the formal definition bears out this informal observation.

We need to show two things: first, that f(n) does grow asymptotically at least as fast as n², and second, that f(n) grows no faster asymptotically than n². We’ll do both very carefully.
Lower bound: f(n) grows asymptotically at least as fast as n². This is established by the portion of the definition that reads (paraphrasing): “there exist positive constants c1 and n0, such that f(n) ≥ c1·n² for all n ≥ n0.” Consider the following (almost correct) reasoning:

    f(n) = 8n² + 2n − 3 ≥ 8n² − 3 = 7n² + (n² − 3) ≥ 7n².

Thus, if we set c1 = 7, then we are done. But in the above reasoning we have implicitly made the assumptions that 2n ≥ 0 and n² − 3 ≥ 0. These are not true for all n, but they are true for all sufficiently large n. In particular, if n ≥ √3, then both are true. So let us select n0 ≥ √3, and now we have f(n) ≥ c1·n² for all n ≥ n0, which is what we need.
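As a quick sanity check (not a substitute for the argument above), the lower bound can be tested numerically with c1 = 7 and n0 = 2, the smallest integer at least √3:

```python
def f(n):
    return 8 * n**2 + 2 * n - 3

c1, n0 = 7, 2  # n0 = 2 is the smallest integer >= sqrt(3)

# The lower bound f(n) >= c1 * n^2 holds at every sampled n >= n0.
assert all(f(n) >= c1 * n**2 for n in range(n0, 10_000))

# The cutoff n0 genuinely matters: at n = 0, f(0) = -3 < 0 = c1 * 0**2.
assert f(0) < c1 * 0**2
```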
Upper bound: f(n) grows asymptotically no faster than n². This is established by the portion of the definition that reads “there exist positive constants c2 and n0, such that f(n) ≤ c2·n² for all n ≥ n0.” Consider the following reasoning (which is almost correct):

    f(n) = 8n² + 2n − 3 ≤ 8n² + 2n ≤ 8n² + 2n² = 10n².

Thus, if we set c2 = 10, then we are done. Here we have implicitly made the assumption that 2n ≤ 2n². This is not true for all n, but it is true for all n ≥ 1, so selecting n0 ≥ 1 handles the upper bound.
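The same kind of numerical spot-check applies to the upper bound, using c2 = 10 and n0 = 1 (the step 2n ≤ 2n² needs n ≥ 1):

```python
def f(n):
    return 8 * n**2 + 2 * n - 3

c2, n0 = 10, 1  # 2n <= 2n^2 requires n >= 1

# The upper bound f(n) <= c2 * n^2 holds at every sampled n >= n0.
assert all(f(n) <= c2 * n**2 for n in range(n0, 10_000))
```

Together with the lower bound (c1 = 7), these witnesses sandwich f(n) between 7n² and 10n² for all sufficiently large n, which is exactly what f(n) ∈ Θ(n²) requires.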