Lecture 8. Paradigm #6 Dynamic Programming
Popularized by Richard Bellman ("Dynamic
Programming", Princeton University Press,
1957; call number QA 264.B36). Chapter 15 of
CLRS.
Typically, dynamic programming reduces the
complexity of a problem from exponential (2^n) to polynomial.
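As an illustration of this speedup (a sketch of the general idea, not the lecture's own example): the naive Fibonacci recursion seen elsewhere in the course makes roughly 2^n calls, while memoizing each subproblem reduces this to n distinct computations.

```python
from functools import lru_cache

# Top-down dynamic programming: each F(i) is computed once and cached,
# so only n distinct subproblems are ever solved instead of ~2^n calls.
@lru_cache(maxsize=None)
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155 -- instant; the naive version takes minutes
```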
Lecture 21 NP-complete problems
Why do we care about NP-complete problems?
Because if we wish to solve the P=NP problem, we need to
deal with the hardest problems in NP.
Why do we want to solve the P=NP problem?
Because resolving it would settle roughly 3000 other known NP-complete problems at once.
Lecture 22 More NPC problems
Today, we prove three problems to be NP-
complete
3CNF-SAT
Clique problem
Vertex cover
Dick Karp
3-CNF SAT is NP-complete
A boolean formula is in 3-conjunctive normal
form (3-CNF) if it consists of clauses
connected by AND, where each clause is an OR of exactly three literals.
For example, (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ x4) is in 3-CNF.
Lecture 23. Subset Sum is NPC
The Subset Sum Problem: Given a set S of
positive integers and a "target" t. Question: Is
there a subset S' of S such that the sum of the
elements of S' is equal to t?
For example, if S =
{35, 54, 384, 520, 530, 672, 831, 935}
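A pseudo-polynomial dynamic program decides the question by tracking every sum reachable with a prefix of S. The sketch below uses the example set above; the target 473 (= 35 + 54 + 384) is my own illustrative choice, since the slide's target is not shown.

```python
def subset_sum(S, t):
    """Return True iff some subset of S sums to exactly t.

    DP over reachable sums: O(n * t) time, the classic pseudo-polynomial
    bound for this NP-complete problem.
    """
    reachable = {0}  # sums achievable using a prefix of S
    for x in S:
        reachable |= {s + x for s in reachable if s + x <= t}
    return t in reachable

S = [35, 54, 384, 520, 530, 672, 831, 935]
print(subset_sum(S, 473))  # True: 35 + 54 + 384 = 473
```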
Lecture 24 Coping with NPC and
Unsolvable problems.
When a problem is unsolvable, that's generally very bad
news: it means there is no general algorithm that is
guaranteed to solve the problem in all cases.
However, it's rare that we actually want the c
CS341, Winter, 2011
Algorithms
Instructor: Ming Li
David R. Cheriton School of Computer Science
University of Waterloo
http://www.cs.uwaterloo.ca/~cs341/
The last century has witnessed the
development of a beautiful and elegant new
scientific field: the design and analysis of algorithms.
Lecture 2
We have given O(n^3), O(n^2), O(n log n) algorithms for the max sub-range problem. This
time, a linear-time algorithm!
The idea is as follows: suppose we have found the maximum subrange sum for
x[1..n-1]. Now we have to find it for x[1..n]. There are
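The incremental idea above can be sketched in Python (an illustrative version, with the array 0-indexed as is idiomatic): extending the answer from the first i-1 elements to the first i requires only the best sum of a subrange ending at the current position.

```python
def max_subrange_sum(x):
    """Linear-time max subrange sum (Kadane-style scan).

    best_ending_here is the maximum sum of a subrange ending at the
    current position; the overall answer is the best of these.
    """
    best = best_ending_here = x[0]
    for v in x[1:]:
        best_ending_here = max(v, best_ending_here + v)
        best = max(best, best_ending_here)
    return best

# Bentley's classic test array: the best subrange is 59+26-53+58+97.
print(max_subrange_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187
```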
Lecture 3: Reduce to known problem
Convex Hull (section 33.3 of CLRS). Suppose we have a bunch of
points in the plane, given by their x and y coordinates. The convex
hull is the "smallest" convex subset that contains all the points. Here
"convex" means that for any two points in the set, the line segment
joining them lies entirely within the set.
Lecture 4. Paradigm #2 Recursion
Last time we discussed Fibonacci numbers F(n), and the algorithm
fib(n)
  if (n <= 1) then return(n)
  else return(fib(n-1) + fib(n-2))
The problem with this algorithm is that it is woefully
inefficient. Let T(n) denote the number of steps
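Writing out the step count (a standard analysis, following the slide's T(n)): each call does constant work plus two recursive calls, so

```latex
T(n) = T(n-1) + T(n-2) + \Theta(1), \qquad T(0) = T(1) = \Theta(1).
```

Since this dominates the Fibonacci recurrence itself, T(n) ≥ F(n) = Θ(φ^n) with φ = (1+√5)/2 ≈ 1.618, so the naive algorithm takes exponential time.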
Lecture 5
Today, how to solve recurrences.
We have learned to guess a solution and prove it by induction.
We have also learned the substitution method.
Today, we learn the master theorem.
More divide and conquer:
closest pair problem
matrix multiplication
Master Theorem
Theorem 4.1 (
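For reference, the theorem being introduced (Theorem 4.1 in CLRS) applies to recurrences of the form T(n) = aT(n/b) + f(n) with constants a ≥ 1 and b > 1, and distinguishes three cases by comparing f(n) against n^(log_b a):

```latex
\begin{aligned}
&\text{1. } f(n) = O\!\left(n^{\log_b a - \varepsilon}\right) \text{ for some } \varepsilon > 0
  \;\Longrightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\right) \\
&\text{2. } f(n) = \Theta\!\left(n^{\log_b a}\right)
  \;\Longrightarrow\; T(n) = \Theta\!\left(n^{\log_b a} \lg n\right) \\
&\text{3. } f(n) = \Omega\!\left(n^{\log_b a + \varepsilon}\right) \text{ and }
  a\,f(n/b) \le c\,f(n) \text{ for some } c < 1
  \;\Longrightarrow\; T(n) = \Theta(f(n))
\end{aligned}
```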
Lecture 6 More Divide-Conquer
and Paradigm #4 Data Structure.
Today, we do more divide-and-conquer.
And, we do Algorithm design paradigm # 4:
invent or augment a data structure.
More divide and conquer:
powering a number
Problem: Compute a^n, where n is a nonnegative integer
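The divide-and-conquer solution squares the half-power: a^n = (a^(n/2))^2, times an extra factor of a when n is odd. A sketch:

```python
def power(a, n):
    """Compute a**n with O(log n) multiplications by repeated squaring."""
    if n == 0:
        return 1
    half = power(a, n // 2)          # one recursive call on n/2
    return half * half * (a if n % 2 else 1)

print(power(3, 13))  # 1594323, using about log2(13) ~ 4 levels of recursion
```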
Lecture 20. Computational Complexity
So far in this course we have discussed two
sorts of problems: problems that are efficiently
solvable, like shortest paths or matrix
multiplication, and problems that are unsolvable.
In between the two, however, are problems that are solvable, but apparently not efficiently.
Lecture 19. Reduction: More
Undecidable problems
It turns out that many problems are undecidable.
In fact, many problems are even harder than being
undecidable. You can actually have an infinite
hierarchy of undecidable problems: each level
contains problems strictly harder than the one below.
Lecture 9. Dynamic Programming:
optimal coin change
We have seen this problem before: you are given an
amount in cents, and you want to make change using a
system of denominations, using the smallest number of
coins possible. Sometimes the greedy algorithm fails to give the minimum.
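Dynamic programming handles the denomination systems where greedy fails: compute the fewest coins for every sub-amount from 0 up to the target. A sketch (the denomination set {1, 3, 4} is a standard greedy-breaking example, not one from the slide):

```python
def min_coins(denoms, amount):
    """Fewest coins summing to `amount`, via DP over all sub-amounts."""
    INF = float("inf")
    best = [0] + [INF] * amount      # best[a] = fewest coins making a
    for a in range(1, amount + 1):
        for d in denoms:
            if d <= a and best[a - d] + 1 < best[a]:
                best[a] = best[a - d] + 1
    return best[amount]

# Greedy on {1, 3, 4} makes 6 as 4+1+1 (3 coins); DP finds 3+3 (2 coins).
print(min_coins([1, 3, 4], 6))  # 2
```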
Lecture 10. Paradigm #8:
Randomized Algorithms
Back to the majority problem (finding the
majority element in an array A).
FIND-MAJORITY(A, n)
  while (true) do
    randomly choose i with 1 <= i <= n;
    if A[i] is the majority then
      return (A[i]);
If there is a majority elem
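The pseudocode can be made runnable (an illustrative Python version, not the course's official code). If a majority element exists, each random sample hits it with probability greater than 1/2, so the expected number of rounds is less than 2; like the pseudocode, it loops forever when no majority exists.

```python
import random

def find_majority(A):
    """Las Vegas majority finding: sample a candidate, verify by counting.

    Each verification is a full O(n) count, so the expected total work
    is O(n) when a majority element is present.
    """
    n = len(A)
    while True:
        x = A[random.randrange(n)]               # random candidate
        if sum(1 for a in A if a == x) > n // 2:  # verify before returning
            return x

print(find_majority([2, 1, 2, 3, 2, 2, 2]))  # 2
```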
Lecture 11 Deterministic Selection
The goal is to determine the i'th smallest element from a list of n
elements in linear time. No random numbers are used.
The algorithm is due to Blum, Floyd, Pratt, Rivest, and Tarjan
(1973). The idea is the same as RANDOMIZED-SELECT, except the pivot is chosen deterministically.
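A compact, simplified sketch of the median-of-medians idea (illustrative only; it copies sublists rather than partitioning in place): the pivot is the median of the group medians, which guarantees a constant fraction of the elements falls on each side, yielding worst-case linear time.

```python
def select(L, i):
    """Return the i-th smallest (1-indexed) element of L, worst-case O(n)."""
    if len(L) <= 5:
        return sorted(L)[i - 1]
    # Median of each group of 5, then recursively the median of those.
    medians = [sorted(L[j:j + 5])[len(L[j:j + 5]) // 2]
               for j in range(0, len(L), 5)]
    pivot = select(medians, (len(medians) + 1) // 2)
    lo = [x for x in L if x < pivot]   # guaranteed < ~7n/10 elements
    hi = [x for x in L if x > pivot]   # likewise
    if i <= len(lo):
        return select(lo, i)
    if i > len(L) - len(hi):
        return select(hi, i - (len(L) - len(hi)))
    return pivot                        # i lands among copies of the pivot

print(select(list(range(100, 0, -1)), 37))  # 37
```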
Lecture 12: Lower bounds
By "lower bounds" here we mean a lower bound on the complexity
of a problem, not an algorithm. Basically we need to prove that no
algorithm, no matter how complicated or clever, can do better than
our bound. This is often hard!
Lecture 13: Adversary Arguments,
continued
This lecture, we continue with the adversary
argument by giving more examples.
Example 4. Largest and Second Largest
Problem: given a list of n elements, find the
2nd largest. [Reference: S. Baase, Computer Algorithms
Lecture 15. Graph Algorithms
An undirected graph G is a pair (V,E), where V is a finite
set of points called vertices and E is a finite set of edges.
An edge e ∈ E is an unordered pair (u,v), where u, v ∈ V.
In a directed graph, the edge e is an ordered pair (u,v)
Lecture 16. Shortest Path Algorithms
The single-source shortest path problem is the following: given a
source vertex s, and a sink vertex v, we'd like to find the shortest
path from s to v. Here shortest path means a sequence of directed
edges from s to
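For nonnegative edge weights, the standard single-source method is Dijkstra's algorithm; here is a sketch (the dict-of-adjacency-lists encoding and the small example graph are illustrative assumptions, not the lecture's notation).

```python
import heapq

def shortest_path_length(graph, s, v):
    """Dijkstra's algorithm: length of the shortest s-to-v path.

    graph maps each vertex to a list of (neighbor, weight) pairs;
    weights must be nonnegative.
    """
    dist = {s: 0}
    pq = [(0, s)]                      # min-heap of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if u == v:
            return d                   # settled: d is optimal for u
        if d > dist.get(u, float("inf")):
            continue                   # stale queue entry, skip
        for w, weight in graph[u]:
            nd = d + weight
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(pq, (nd, w))
    return float("inf")                # v unreachable from s

g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(shortest_path_length(g, "s", "b"))  # 3, via s -> a -> b
```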
Lecture 17 Path Algebra
Matrix multiplication of adjacency matrices of directed
graphs gives important information about the graphs.
Manipulating these matrices to study graphs is path
algebra.
With "path algebra" we can solve the following problems:
Co
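One concrete instance of path algebra (an illustrative sketch, not the lecture's example): entry [i][j] of the k-th power of the adjacency matrix counts the directed walks of length exactly k from i to j.

```python
def count_walks(adj, k):
    """k-th power of adjacency matrix adj; entry [i][j] counts
    directed walks of length exactly k from vertex i to vertex j."""
    n = len(adj)

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(n))
                 for j in range(n)] for i in range(n)]

    result = adj
    for _ in range(k - 1):
        result = matmul(result, adj)
    return result

# Directed triangle 0 -> 1 -> 2 -> 0: every length-3 walk returns home,
# so the cube of the adjacency matrix is the identity.
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
print(count_walks(A, 3))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```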
Lecture 18. Unsolvability
Kurt Gödel
Before the 1930s, mathematics was not like today.
Then people believed that everything true must be
provable. (More formally, in a powerful enough
mathematical system, a true statement should be
provable as a theorem.)
Lecture 7 Paradigm #5 Greedy Algorithms
Ref. CLRS, Chap. 16
Example 1 (Making change) Suppose you buy
something of cost less than $5, and you get your
change in Canadian coins. How can you
minimize the number of coins returned to you?
Formally: Given x
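The greedy strategy takes the largest coin that fits, repeatedly. A sketch for the 2011-era Canadian system, in cents (toonie, loonie, quarter, dime, nickel, penny), where greedy does happen to be optimal:

```python
def greedy_change(amount_cents, denoms=(200, 100, 25, 10, 5, 1)):
    """Greedy change-making: repeatedly take the largest coin <= remainder."""
    coins = []
    for d in denoms:                 # denominations in decreasing order
        while amount_cents >= d:
            coins.append(d)
            amount_cents -= d
    return coins

print(greedy_change(341))  # [200, 100, 25, 10, 5, 1] -- 6 coins for $3.41
```

Note that greedy optimality depends on the denomination system; a later lecture's coin-change DP handles systems where this fails.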