Notes from class 13:
Watching the team contests has finally made me understand the role of the word "consider" in proofs and why I keep encouraging it.
It is very important in written and oral presentations to keep in mind the role of the audience. If I
Today we will talk about the ellipsoid algorithm, a divide-and-conquer algorithm that will optimize any convex function in high dimension in polynomial time. This algorithm is not used in practice because each stage of the divide-and-conquer approach requires ex
Last class we introduced our first two optimization algorithms, gradient descent and binary search, in the context of minimizing a function in 1 dimension. In this class we will move to two-dimensional optimization, looking at variants of binary search to
In this class we will discuss and write code for optimization of 1-dimensional functions, which, despite seeming easy, will give us a firm foundation of intuition and code we will build on when we move to higher dimensions next.
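As a small sketch of the kind of 1-dimensional code we have in mind (the function name and tolerance are illustrative, and this is only one variant of the binary-search family, not necessarily the one we will write in class): ternary search shrinks a bracketing interval around the minimum of a unimodal function.

```python
def ternary_search_min(f, lo, hi, tol=1e-9):
    """Minimize a unimodal function f on [lo, hi] by interval shrinking.
    Each iteration discards one third of the interval, so the bracket
    shrinks geometrically until it is narrower than tol."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2   # the minimum cannot lie in (m2, hi]
        else:
            lo = m1   # the minimum cannot lie in [lo, m1)
    return (lo + hi) / 2
```

For example, minimizing `(x - 2)**2` on `[0, 10]` returns a point very close to 2.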

Optimization in 1 dimension
In this class we wrap up our discussion of NP-hard problems by contrasting NP-hardness to a rather different style of hardness guarantee, and then start the new topic of optimization, which will last us several weeks in different forms.

Hard instances o
Notes from class 20:
This week we will discuss information compression. For some computational tasks, arithmetic is the bottleneck, but in many other cases storing and transferring data is the most expensive part. Information compression lets us trade off
Notes from class 22:
In this class we will cover 3 topics: we will see a graph theory algorithm for constructing a "minimum spanning tree", which will give us another interesting example of a greedy algorithm and the proof strategy for greedy alg
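One classic greedy construction of a minimum spanning tree (a sketch, not necessarily the exact variant covered in class) is Kruskal's algorithm: sort the edges by weight and greedily take each edge that does not create a cycle, tracked with a union-find structure.

```python
def kruskal_mst(n, edges):
    """Greedy MST: edges is a list of (weight, u, v) over vertices 0..n-1.
    Returns the list of edges chosen for the minimum spanning tree."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):          # consider edges cheapest-first
        ru, rv = find(u), find(v)
        if ru != rv:                       # edge joins two components: no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```

The greedy proof strategy alluded to above is an exchange argument: any edge this algorithm picks can be shown to belong to some optimal tree.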
This week we are going through lecture slides produced in association with the Kleinberg-Tardos algorithms textbook (which we are not using, but which covers this week's topics very well).
Monday: http://www.cs.princeton.edu/~wayne/kleinberg-tardos/pdf/08I
We have just discussed hash tables, and are now going to cover "augmented self-balancing binary search trees". In contrast to hash tables, which store their data in essentially random locations, binary search trees store their data in a much more structur
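To illustrate the "augmented" part (a sketch only; the class names and the choice of augmentation are for illustration, and the balancing logic of a real self-balancing tree is omitted here): storing each subtree's size lets us answer order-statistic queries, something a hash table cannot do.

```python
class Node:
    """BST node augmented with the size of its subtree."""
    __slots__ = ("key", "left", "right", "size")
    def __init__(self, key):
        self.key, self.left, self.right, self.size = key, None, None, 1

def insert(root, key):
    """Plain (unbalanced) BST insert; a self-balancing tree would also
    rebalance here and keep the size fields correct through rotations."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    root.size += 1          # maintain the augmentation on the way back up
    return root

def kth_smallest(root, k):
    """Return the k-th smallest key (1-indexed), guided by subtree sizes."""
    left_size = root.left.size if root.left else 0
    if k <= left_size:
        return kth_smallest(root.left, k)
    if k == left_size + 1:
        return root.key
    return kth_smallest(root.right, k - left_size - 1)
```

This is exactly the structured-location payoff: because keys live in sorted order, rank queries take time proportional to the tree's height.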
Notes from class 17:
In this class we discuss how to implement hash tables, and specifically, how to cope with collisions. We will discuss 6 overlapping strategies for how to organize our hash table in response to collisions, and discuss how choosing a un
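One standard collision strategy, presumably among those discussed (the class and method names here are illustrative), is chaining: each slot holds a small list, and colliding keys simply share the list.

```python
class ChainedHashTable:
    """Minimal hash table resolving collisions by chaining."""
    def __init__(self, nslots=8):
        self.slots = [[] for _ in range(nslots)]

    def _bucket(self, key):
        return self.slots[hash(key) % len(self.slots)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # key already present: overwrite
                return
        bucket.append((key, value))        # collision or new key: extend chain

    def get(self, key):
        for k, v in self._bucket(key):     # linear scan of the chain
            if k == key:
                return v
        raise KeyError(key)
```

With a good hash function the chains stay short on average, so `get` and `put` are expected constant time.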
Notes from class 15:
In this class we finish up our discussion of "map of the computer" and competitive analysis, and move on to talk about hashing, starting a larger unit on "how to store your data".

Cache-oblivious matrix multiplication:
Last class wh
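A sketch of the cache-oblivious idea (function name and threshold are illustrative, and this version assumes the matrix dimension is a power of two): multiply by recursively splitting each matrix into quadrants, so that at some recursion depth the subproblems fit in every level of cache, without the code ever knowing any cache's size.

```python
def matmul_recursive(A, B, threshold=16):
    """Multiply square matrices (lists of lists) by quadrant splitting.
    Assumes len(A) is a power of two; threshold sets the base-case size."""
    n = len(A)
    if n <= threshold:                      # base case: ordinary triple loop
        return [[sum(A[i][k] * B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    h = n // 2
    def quad(M, r, c):                      # copy out an h-by-h quadrant
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    C11 = add(matmul_recursive(A11, B11, threshold), matmul_recursive(A12, B21, threshold))
    C12 = add(matmul_recursive(A11, B12, threshold), matmul_recursive(A12, B22, threshold))
    C21 = add(matmul_recursive(A21, B11, threshold), matmul_recursive(A22, B21, threshold))
    C22 = add(matmul_recursive(A21, B12, threshold), matmul_recursive(A22, B22, threshold))
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])
```

The copying in `quad` is a simplification for readability; a tuned implementation would recurse on index ranges instead of slicing.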
Notes from class 16:
Today we discuss hash functions, and the tools needed to think about them. A hash function is just a part of a hash table implementation, and we will discuss the rest next class; but for the moment fill in the blanks with whatever you
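As one concrete hash function to have in mind (a sketch; the base and modulus here are illustrative choices, not parameters from class): a polynomial hash treats a string as the digits of a number in some base, reduced modulo a large prime.

```python
def poly_hash(s, base=257, mod=(1 << 61) - 1):
    """Polynomial string hash: interpret s as a base-`base` number mod a prime.
    2**61 - 1 is a Mersenne prime, a common modulus choice for such hashes."""
    h = 0
    for c in s:
        h = (h * base + ord(c)) % mod   # Horner's rule, one step per character
    return h
```

Note that, unlike a checksum that just sums characters, the positional weighting makes `"ab"` and `"ba"` hash differently.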
Notes from class 14:
We have been discussing how to think about writing effective algorithms in practice, with emphasis on paying attention to the "map of the computer" (thinking about each component and connection in terms of bandwidth and latency) and
Today we will wrap up our discussion of optimization, so that you are prepared for the optimization homework. We will first discuss gradient descent in high dimensions (recall that we discussed gradient descent in 1 dimension 3 classes ago), along with ho
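A minimal sketch of gradient descent in n dimensions (the function name, fixed step size, and the forward-difference gradient below are illustrative stand-ins; in class we may use an analytic gradient and a smarter step-size rule):

```python
def gradient_descent(f, x0, step=0.1, iters=1000, eps=1e-6):
    """Minimize f: R^n -> R by repeatedly stepping against the gradient.
    The gradient is estimated by forward finite differences, a simple
    stand-in when an analytic gradient is not available."""
    x = list(x0)
    for _ in range(iters):
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += eps
            grad.append((f(xp) - f(x)) / eps)   # i-th partial derivative
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x
```

On a smooth bowl-shaped function such as `(x - 1)**2 + (y + 2)**2` this converges to the minimizer; with a fixed step it can diverge on steeper functions, which is why step-size selection matters.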