Lecture Notes
CMSC 251
high school algebra. If $c$ is a constant (does not depend on the summation index $i$) then

$$\sum_{i=1}^{n} c\,a_i = c \sum_{i=1}^{n} a_i \qquad \text{and} \qquad \sum_{i=1}^{n} (a_i + b_i) = \sum_{i=1}^{n} a_i + \sum_{i=1}^{n} b_i.$$
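The two linearity rules above can be spot-checked numerically. The sequences below are arbitrary illustrative values, not anything from the lecture:

```python
# Spot-check of the two linearity rules for summations,
# using arbitrary sample sequences a_i and b_i.
a = [3, 1, 4, 1, 5]
b = [9, 2, 6, 5, 3]
c = 7

# sum of c*a_i equals c times the sum of a_i
assert sum(c * ai for ai in a) == c * sum(a)

# sum of (a_i + b_i) equals sum of a_i plus sum of b_i
assert sum(ai + bi for ai, bi in zip(a, b)) == sum(a) + sum(b)
```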
There are some particularly important summations, which you should probably commit to memory (or
at least remember their asymptotic growth rates). If you want some practice with induction, the first
two are easy to prove by induction.
Arithmetic Series: For $n \ge 0$,

$$\sum_{i=1}^{n} i = 1 + 2 + \cdots + n = \frac{n(n+1)}{2} = \Theta(n^2).$$
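The closed form is easy to confirm by brute force. A minimal check, comparing direct iteration against $n(n+1)/2$:

```python
def arithmetic_sum(n):
    """Sum 1 + 2 + ... + n by direct iteration."""
    return sum(range(1, n + 1))

# The closed form n(n+1)/2 matches the direct sum for each n checked
# (this is a sanity check, not a proof; induction gives the proof).
for n in range(0, 100):
    assert arithmetic_sum(n) == n * (n + 1) // 2
```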
Geometric Series: Let $x \ne 1$ be any constant (independent of $i$); then for $n \ge 0$,

$$\sum_{i=0}^{n} x^i = 1 + x + x^2 + \cdots + x^n = \frac{x^{n+1} - 1}{x - 1}.$$
If $0 < x < 1$ then this is $\Theta(1)$, and if $x > 1$, then this is $\Theta(x^n)$.
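A quick check of the closed form, using exact rational arithmetic so that fractional $x$ values (such as $1/2$) do not suffer rounding error:

```python
from fractions import Fraction

def geometric_sum(x, n):
    """Sum x^0 + x^1 + ... + x^n by direct iteration (exact arithmetic)."""
    return sum(Fraction(x) ** i for i in range(n + 1))

# Closed form (x^(n+1) - 1) / (x - 1), valid for any x != 1.
for x in (2, 3, Fraction(1, 2)):
    for n in range(20):
        closed = (Fraction(x) ** (n + 1) - 1) / (Fraction(x) - 1)
        assert geometric_sum(x, n) == closed
```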
Harmonic Series: This arises often in probabilistic analyses of algorithms. For $n \ge 0$,

$$H_n = \sum_{i=1}^{n} \frac{1}{i} = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \approx \ln n = \Theta(\ln n).$$
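The approximation $H_n \approx \ln n$ can be seen numerically: the gap $H_n - \ln n$ settles toward Euler's constant $\gamma \approx 0.5772$ (a fact beyond these notes, but easy to observe):

```python
import math
from fractions import Fraction

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n, computed exactly as a Fraction."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

# H_n tracks ln n closely: the difference stays small and slowly
# approaches Euler's constant (~0.5772) as n grows.
for n in (10, 100, 1000):
    diff = float(harmonic(n)) - math.log(n)
    assert 0.5 < diff < 0.65
```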
Lecture 3: Summations and Analyzing Programs with Loops
(Tuesday, Feb 3, 1998)
Read:
Chapt. 3 in CLR.
Recap: Last time we presented an algorithm for the 2-dimensional maxima problem. Recall that the algorithm consisted of two nested loops. It looked something like this:
Brute Force Maxima

Maxima(int n, Point P[1..n]) {
    for i = 1 to n {
        ...
        for j = 1 to n {
            ...
            ...
        }
    }
}
We were interested in measuring the worst-case running time of this algorithm as a function of the
input size, $n$. The stuff in the "..." has been omitted because it is unimportant for the analysis.
Last time we counted the number of times that the algorithm accessed a coordinate of any point. (This
was only one of many things that we could have chosen to count.) We showed that as a function of $n$
in the worst case this quantity was

$$T(n) = 4n^2 + 2n.$$
We were most interested in the growth rate for large values of $n$ (since almost all algorithms run fast
for small values of $n$), so we were most interested in the $4n^2$ term, which determines how the function
grows asymptotically for large $n$. Also, we do not care about constant factors (because we wanted
simplicity and machine independence, and figured that the constant factors were better measured by
implementing the algorithm). So we can ignore the factor 4 and simply say that the algorithm's
worst-case running time grows asymptotically as $n^2$, which we wrote as $\Theta(n^2)$.
In this and the next lecture we will consider the questions of (1) how one goes about analyzing
the running time of an algorithm as a function such as $T(n)$ above, and (2) how one arrives at a
simple asymptotic expression for that running time.
A Harder Example: Let's consider another example. Again, we will ignore stuff that takes constant time
(expressed as "..." in the code below).
A Not-So-Simple Example:

for i = 1 to n {             // assume that n is the input size
    ...
    for j = 1 to 2*i {
        ...
        k = j;
        while (k >= 0) {
            ...
            k = k - 1;
        }
    }
}
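Before doing the analysis symbolically, it can help to count directly. The sketch below is a Python transcription of the pseudocode above (the function names are mine, not the lecture's), instrumented to count executions of the innermost statement, alongside the same count written as a nested summation:

```python
def count_inner(n):
    """Run the nested loops above, counting innermost-statement executions."""
    count = 0
    for i in range(1, n + 1):
        for j in range(1, 2 * i + 1):
            k = j
            while k >= 0:
                count += 1   # stands in for the "..." body
                k = k - 1
    return count

# The while loop runs k = j, j-1, ..., 0, i.e. j+1 times, so the
# total is: sum over i = 1..n of ( sum over j = 1..2i of (j+1) ).
def count_by_summation(n):
    return sum(sum(j + 1 for j in range(1, 2 * i + 1))
               for i in range(1, n + 1))

for n in range(30):
    assert count_inner(n) == count_by_summation(n)
```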
How do we analyze the running time of an algorithm that has many complex nested loops? The
answer is that we write out the loops as summations, and then try to solve the summations. Let
$I()$, $M()$, $T()$ be the running times for (one full execution of) the inner loop, middle loop, and the entire
program. To convert the loops into summations, we work from the inside out. Let's consider one pass
through the innermost loop. The number of passes through the loop depends on $j$.