* Separate Interface from Implementation
* Pure virtual functions/abstract base classes.
* Structural invariants: must always be true on any method exit
* The "big instance" problem
* Global/local object lifetimes
* A new arena: the "heap".
So far, the data structures we've *built* have all had room for "at
most N" elements.
* the boards in the sorry simulation could have at most 60
* the various IntSet implementations could have at most 100
* the shoe in the blackjack game has room for only one (52-card) deck
Granted, we could have extended these sizes to larger ones, but no
matter what we do, so far we only know how to create "static,
fixed-sized" structures---we had to declare how big these things could
possibly get, and (in the case of IntSet) write code to handle the
case where the structure was already full.
Sometimes, the process you are modeling has a physical limit, which
makes a static, fixed-sized structure a reasonable choice.
For example, a deck of cards has 52 individual cards in it (not counting
jokers), so this is a reasonable limitation.
On the other hand, there is no meaningful sense in which a "set of
integers" is limited to some particular number of elements.
No matter how big you make the set's capacity, an application that needs
more will eventually come along.
Technically, this is not quite true. If an integer is represented in
K bits, there cannot be more than 2^K distinct integers, giving us a
maximum set size. However, if you represented the "largest possible"
set of integers on a 32 bit machine, using one bit per possible
value, it would be 512 MB---and machines typically sell with about
2-4x that much memory, total.
For example, consider the list_t type from the second project. This
type imposed no limits on how large a list could grow.
Of course, the machine would eventually run out of space to store
lists, which would cause the program to fail.
But, if you run the