CSE 12:
Basic data structures and
object-oriented design
Jacob Whitehill
jake@mplab.ucsd.edu
Lecture Ate
Thursday, 11 Aug 2011

More on performance analysis.

Asymptotic performance analysis
• Asymptotic performance analysis is a coarse but useful
means of describing and comparing the performance of
algorithms as a function of the input size n when n gets
large.
• Asymptotic analysis applies to both time cost and space
cost.
• Asymptotic analysis hides details of timing (that we don't
care about) due to:
• Speed of computer.
• Slight differences in implementation.
• Programming language.

O, Ω, and Θ

• In order to justify describing the time cost T(n)=3n+4 as
just "linear" (Θ(n)), we first need some mathematical
machinery:
• We define a lower bound on T with Ω.
• We define an upper bound on T with O.
• We define a tight bound (bounded above and below) on T
with Θ.
• Θ is important because it is more specific than O.
(For example, technically, 3n+4=O(2^n).)

Abuse of notation
• When we say that 3n+5 is "linear in n", what we really
mean (mathematically) is that 3n+5 is Θ(n).
• Note: In computer science, we often say O where we really
mean Θ. This is a slight abuse of notation.
• We will use O in this course to mean Θ.
Asymptotic performance analysis

• Asymptotic analysis assigns algorithms to different
"complexity classes":
• O(1) - constant - performance of algorithm does not
depend on input size.
• O(log n) - logarithmic
• O(n) - linear - doubling n will double the time cost.
• O(n^2) - quadratic
• O(2^n) - exponential
• Algorithms that differ in complexity class can have vastly
different runtime performance (for large n).

Analysis of data structures
• Let's put these ideas into practice and analyze the
performance of algorithms related to ArrayList:
• add(o), get(index), find(o), and remove(index).
• As a first step, we must decide what the "input size"
means.
• What is the "input" to these algorithms?

Analysis of data structures

• Each of the methods (algorithms) above operates on the
_underlyingStorage and either o or index.
• o and index are always length 1 - their size cannot grow.
• However, the number of data in _underlyingStorage (stored
in _numElements) will grow as the user adds elements to
the ArrayList.
• Hence, we measure asymptotic time cost as a function of n,
the number of elements stored (_numElements).

Adding to back of list
• What is the time complexity of this method?

class ArrayList<T> {
  ...
  void addToBack (T o) {
    // Assume _underlyingStorage is big enough
    _underlyingStorage[_numElements] = o;
    _numElements++;
  }
}

Adding to back of list
• What is the time complexity of this method?
(Note that, for this method, the worst case, average case,
and best case are all the same.)

class ArrayList<T> {
  ...
  void addToBack (T o) {
    // Assume _underlyingStorage is big enough
    _underlyingStorage[_numElements] = o;
    _numElements++;
  }
}

O(1) - no matter how many elements the list already contains,
the cost is just 2 "abstract operations".

Retrieving an element
• What is the time complexity of this method?

class ArrayList<T> {
  ...
  T get (int index) {
    return _underlyingStorage[index];
  }
}

Retrieving an element
• What is the time complexity of this method?

class ArrayList<T> {
  ...
  T get (int index) {
    return _underlyingStorage[index];
  }
}

O(1).

Adding to front of list
• What is the time complexity of this method?

class ArrayList<T> {
  ...
  void addToFront (T o) {
    ...
  }
}

Adding to front of list
• What is the time complexity of this method?
We have to move everything over by 1.

class ArrayList<T> {
  ...
  void addToFront (T o) {
    // Assume _underlyingStorage is big enough.
    // Shift right-to-left so no element is overwritten
    // before it has been copied.
    for (int i = _numElements - 1; i >= 0; i--) {
      _underlyingStorage[i+1] = _underlyingStorage[i];
    }
    _underlyingStorage[0] = o;
    _numElements++;
  }
}

O(n).
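The operations analyzed in these slides can be collected into one compilable sketch. This follows the slides' field names (_underlyingStorage, _numElements) but is a simplified illustration, not the course's actual ArrayList: growth of the backing array is omitted and a fixed capacity is assumed, to keep the cost of each operation visible.

```java
// Minimal sketch of the ArrayList operations analyzed in these
// slides. Fixed capacity; no resizing logic.
public class MiniArrayList<T> {
    private final Object[] _underlyingStorage;
    private int _numElements = 0;

    public MiniArrayList(int capacity) {
        _underlyingStorage = new Object[capacity];
    }

    // O(1): one write and one increment, regardless of n.
    public void addToBack(T o) {
        _underlyingStorage[_numElements++] = o;
    }

    // O(n): every existing element shifts one slot to the right.
    public void addToFront(T o) {
        for (int i = _numElements - 1; i >= 0; i--) {
            _underlyingStorage[i + 1] = _underlyingStorage[i];
        }
        _underlyingStorage[0] = o;
        _numElements++;
    }

    // O(1): direct array indexing.
    @SuppressWarnings("unchecked")
    public T get(int index) {
        return (T) _underlyingStorage[index];
    }

    // O(1) best case, O(n) worst case: linear scan.
    // Returns -1 if o is not found.
    public int find(T o) {
        for (int i = 0; i < _numElements; i++) {
            if (_underlyingStorage[i].equals(o)) {
                return i;
            }
        }
        return -1;
    }

    public int size() { return _numElements; }
}
```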
Finding an element

• What is the time complexity of this method in the best
case? Worst case?

class ArrayList<T> {
  ...
  // Returns lowest index of o in the ArrayList, or
  // -1 if o is not found.
  int find (T o) {
    for (int i = 0; i < _numElements; i++) {
      if (_underlyingStorage[i].equals(o)) { // not null
        return i;
      }
    }
    return -1;
  }
}

Finding an element
• What is the time complexity of this method in the best
case? Worst case?

class ArrayList<T> {
  ...
  // Returns lowest index of o in the ArrayList, or
  // -1 if o is not found.
  int find (T o) {
    for (int i = 0; i < _numElements; i++) {
      if (_underlyingStorage[i].equals(o)) { // not null
        return i;
      }
    }
    return -1;
  }
}

O(1) in best case; O(n) in worst case.

Adding n elements
• Now, let's consider the time complexity of doing many adds
in sequence, starting from an empty list:

void addManyToFront (T[] many) {
  for (int i = 0; i < many.length; i++) {
    addToFront(many[i]);
  }
}

• What is the time complexity of addManyToFront on an array
of size n?

Adding n elements
• To calculate the total time cost, we have to sum the time
costs of the individual calls to addToFront.
• Each call to addToFront(o) takes about time i, where i is
the current size of the list. (We have to "move over" i
elements by one step to the right.)

void addManyToFront (T[] many) {
  for (int i = 0; i < many.length; i++) {
    addToFront(many[i]);
  }
}

• Let T(i) be the cost of addToFront at iteration i:
T(0)=1, T(1)=2, ..., T(n-1)=n.

Adding n elements

• Now we just have to add together all the T(i):
• Note that we would get the same asymptotic bound
even if we calculated the cost T(i) slightly differently,
e.g., T(i)=3i+2:

\[
\sum_{i=0}^{n-1} T(i) = \sum_{i=0}^{n-1} i = \frac{n(n-1)}{2} = O(n^2)
\]

\[
\sum_{i=0}^{n-1} T(i) = \sum_{i=0}^{n-1} (3i+2)
= 3\sum_{i=0}^{n-1} i + 2n
= 3\,\frac{n(n-1)}{2} + 2n = O(n^2)
\]
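The quadratic total can also be checked empirically. A small sketch (a hypothetical helper, not from the slides) counts element moves performed by n successive front-insertions into an array-backed list; the count comes out to exactly 0 + 1 + ... + (n-1) = n(n-1)/2.

```java
// Counts how many element moves n successive front-insertions
// perform on an array-backed list. Inserting into a list of
// current size i moves i elements, so the grand total is
// n(n-1)/2, i.e. O(n^2).
public class FrontInsertCost {
    public static long countMoves(int n) {
        Object[] storage = new Object[n];
        int numElements = 0;
        long moves = 0;
        for (int k = 0; k < n; k++) {
            // Shift existing elements right by one slot.
            for (int i = numElements - 1; i >= 0; i--) {
                storage[i + 1] = storage[i];
                moves++;
            }
            storage[0] = k;
            numElements++;
        }
        return moves;
    }
}
```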
Finding an element
• What is the time complexity of this method in the average
case?

class ArrayList<T> {
  ...
  // Returns lowest index of o in the ArrayList, or
  // -1 if o is not found.
  int find (T o) {
    for (int i = 0; i < _numElements; i++) {
      if (_underlyingStorage[i].equals(o)) { // not null
        return i;
      }
    }
    return -1;
  }
}

Finding an element: average case
• Finding an exact formula for the average case performance
can be tricky (if not impossible).
• In order to compute the average, or expected, time cost,
we must know:
• The time cost T(Xn) for a particular input X of size n.
• The probability P(Xn) of that input X.
• The expected time cost, over all inputs X of size n, is
then:

\[
\mathrm{AvgCaseTimeCost}_n = E[T(X_n)] = \sum_{X_n} P(X_n)\,T(X_n)
\]

Finding an element: average case
• Finding an exact formula for the average case performance
can be tricky (if not impossible).
• In order to compute the average, or expected, time cost,
we must know:
(In this case, X consists of both the element o and the
contents of _underlyingStorage.)
• The time cost T(Xn) for a particular input X of size n.
• The probability P(Xn) of that input X.
• The expected time cost, over all inputs X of size n, is
then:

\[
\mathrm{AvgCaseTimeCost}_n = E[T(X_n)] = \sum_{X_n} P(X_n)\,T(X_n)
\]

("E" for "Expectation". Sum the time costs for all possible
inputs, and weight each cost by how likely it is to occur.)

Finding an element: average case
• In the find(o) method listed above, it is possible that
the user gives us an o that is not contained in the list.
• This will result in O(n) time cost.
• How "likely" is this event?
• We have no way of knowing - we could make an arbitrary
assumption, but the result would be meaningless.
• Let's remove this case from consideration and assume that
o is always present in the list.
• What is the average-case time cost then?

Finding an element: average case
• Even when we assume o is present in the list somewhere, we
have no idea whether the o the user gives us will "tend to
be at the front" or "tend to be at the back" of the list.
• However, here we can make a plausible assumption:
• For an ArrayList of n elements, the probability that o
is contained at index i is 1/n.
• In other words, o is equally likely to be in any of the
"slots" of the array.

Finding an element: average case
• Given this assumption, we can finally make headway.
• Let's define T(i) to be the cost of the find(o) method as
a function of i, the location in _underlyingStorage where o
is located. What is T(i)?

class ArrayList<T> {
  ...
  // Returns lowest index of o in the ArrayList, or
  // -1 if o is not found.
  int find (T o) {
    for (int i = 0; i < _numElements; i++) {
      if (_underlyingStorage[i].equals(o)) { // not null
        return i;
      }
    }
    return -1;
  }
}

Finding an element: average case
• Given this assumption, we can finally make headway.
• Let's define T(i) to be the cost of the find(o) method as
a function of i, the location in _underlyingStorage where o
is located. What is T(i)?

class ArrayList<T> {
  ...
  // Returns lowest index of o in the ArrayList, or
  // -1 if o is not found.
  int find (T o) {
    for (int i = 0; i < _numElements; i++) {
      if (_underlyingStorage[i].equals(o)) { // not null
        return i;
      }
    }
    return -1;
  }
}

T(i) = i

Finding an element: average case
• Now, we can rewrite the expected time cost in terms of an
arbitrary input X, as the expected time cost in terms of
where in the array the element o will be found.

\[
\begin{aligned}
\mathrm{AvgCaseTimeCost}_n
&= \sum_i P(i)\,T(i)
  && \text{Redefine } P(X_n) \text{ and } T(X_n) \text{ in terms of } P(i) \text{ and } T(i). \\
&= \sum_i \frac{1}{n}\,i
  && \text{Substitute terms.} \\
&= \frac{1}{n}\sum_i i
  && \text{Move } 1/n \text{ out of the summation.} \\
&= \frac{1}{n}\,\frac{n(n+1)}{2}
  && \text{Formula for arithmetic series.} \\
&= \frac{n+1}{2}
  && \text{The } n\text{'s cancel.} \\
&= O(n)
  && \text{Find asymptotic bound.}
\end{aligned}
\]
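The (n+1)/2 result can be checked with a small sketch (a hypothetical helper, not from the slides): place the target at every possible index in turn, count the comparisons a linear scan makes (i+1 comparisons when the target sits at index i), and average over all n equally likely positions.

```java
// Empirically checks the average-case cost of linear search under
// the uniform-position assumption. The average number of
// comparisons over all n target positions is (n+1)/2.
public class AvgFindCost {
    static int comparisons;

    static int find(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            comparisons++;
            if (a[i] == target) {
                return i;
            }
        }
        return -1;
    }

    // Average number of comparisons over all n equally likely
    // target positions.
    public static double averageCost(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i; // distinct values
        long total = 0;
        for (int pos = 0; pos < n; pos++) {
            comparisons = 0;
            find(a, pos); // target located at index pos
            total += comparisons;
        }
        return (double) total / n;
    }
}
```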
Questions to ponder
• What is the time cost of adding to the back of a
singly-linked list, as a function of the number of elements
already in the list?
• With just a _head pointer?
• With both _head and _tail?
• What if _head and _tail point to dummy nodes?

More on performance measurement.

Empirical performance measurement
• As an alternative to describing an algorithm's performance
with a "number of abstract operations", we can also measure
its time empirically using a clock.
• As illustrated last lecture, counting "abstract operations"
can anyway hide real performance differences, e.g., between
using int and Integer.

Empirical performance measurement

• There are also many cases where you don't know how an
algorithm works internally.
• Many programs and libraries are not open source!
• You have to analyze an algorithm's performance as a black
box.
• "Black box" - you can run the program but cannot see how
it works internally.
• It may even be useful to deduce the asymptotic time cost
by measuring the time cost for different input sizes.

Procedure for measuring time cost

• Let's suppose we wish to measure the time cost of
algorithm A as a function of its input size n.
• We need to choose the set of values of n that we will
test.
• If we make n too big, our algorithm A may never terminate
(the input is "too big").
• If we make n too small, then A may finish so fast that the
"elapsed time" is practically 0, and we won't get a
reliable clock measurement.

Procedure for measuring time cost
• In practice, one "guesses" a few values for n, sees how
fast A executes on them, and selects a range of values for
n.
• Let's define an array of different input sizes, e.g.:

int[] N = { 1000, 2000, 3000, ..., 10000 };

• Now, for each input size N[i], we want to measure A's time
cost.

Procedure for measuring time cost
• Procedure (draft 1): Make sure to start and stop the clock
as "tightly" as possible around the actual algorithm A.

for (int i = 0; i < N.length; i++) {
  final Object X = initializeInput(N[i]);
  final long startTime = getClockTime();
  A(X); // Run algorithm A on input X of size N[i]
  final long endTime = getClockTime();
  final long elapsedTime = endTime - startTime;
  System.out.println("Time for N[" + i + "]: " + elapsedTime);
}

Procedure for measuring time cost
• The procedure would work fine if there were no variability
in how long A(X) took to execute.
• Unfortunately, in the "real world", each measurement of
the time cost of A(X) is corrupted by noise:
• Garbage collector!
• Other programs running.
• Cache locality.
• Swapping to/from disk.
• Input/output requests from external devices.

Procedure for measuring time cost
• If we measured the time cost of A(X) based on just one
measurement, then our estimate of the "true" time cost of
A(X) would be very imprecise.
• We might get unlucky and measure A(X) while the computer
is doing a "system update".
• If we're very unlucky, this might occur during some values
of i, but not for others, thereby skewing the trend we seek
to discover across the different N[i].

Improved procedure for measuring time cost
• A much-improved procedure for measuring the time cost of
A(X) is to compute the average time across M trials.
• Procedure (draft 2):

for (int i = 0; i < N.length; i++) {
  final Object X = initializeInput(N[i]);
  final long[] elapsedTimes = new long[M];
  for (int j = 0; j < M; j++) {
    final long startTime = getClockTime();
    A(X); // Run algorithm A on input X of size N[i]
    final long endTime = getClockTime();
    elapsedTimes[j] = endTime - startTime;
  }
  final double avgElapsedTime = computeAvg(elapsedTimes);
  System.out.println("Time for N[" + i + "]: " + avgElapsedTime);
}

Improved procedure for measuring time cost
• If the elapsed time measured in the jth trial is Tj, then
the average over all M trials is:

\[
\bar{T} = \frac{1}{M} \sum_{j=1}^{M} T_j
\]

• We will use the average time "Tbar" as an estimate of the
"true" time cost of A(X).
• The more trials M we use to compute the average, the more
precise our estimate "Tbar" will be.

Improved procedure for measuring time cost
• Alternatively, we can start/stop the clock just once.
• Procedure (draft 2b):

for (int i = 0; i < N.length; i++) {
  final Object X = initializeInput(N[i]);
  final long startTime = getClockTime();
  for (int j = 0; j < M; j++) {
    A(X); // Run algorithm A on input X of size N[i]
  }
  final long endTime = getClockTime();
  final double avgElapsedTime = (endTime - startTime) / (double) M;
  System.out.println("Time for N[" + i + "]: " + avgElapsedTime);
}

Quantifying uncertainty
• A key issue in any experiment is to quantify the
uncertainty of all measurements.
• Example:
• We are attempting to estimate the "true" time cost of
A(X) by averaging together the results of many trials.
• After computing "Tbar", how far from the "true" time
cost of A(X) was our estimate?

Quantifying uncertainty
• A key issue in any experiment is to quantify the
uncertainty of all measurements.
• Example:
• You are attempting to estimate the "true" time cost of
A(X) by averaging together the results of many trials.
• After computing "Tbar", how far from the "true" time
cost of A(X) was your estimate?
• In order to compute this, we would have to know what
the true time cost is - and that's what we're trying
to estimate!
• We must find another way to quantify uncertainty...

Standard error versus standard deviation

• Some of you may already be familiar with the standard
deviation:
• The standard deviation measures how "varied" the
individual measurements Tj are.

\[
\sigma = \sqrt{\frac{1}{M} \sum_{j=1}^{M} (T_j - \bar{T})^2}
\]

• The standard deviation gives a sense of "how much noise
there is."
• However, in most cases, we are less interested in
characterizing the noise, and more interested in measuring
the true time cost of A(X) itself.
• For this, we want the standard error.

Quantifying your uncertainty
• In statistics, the uncertainty associated with a
measurement (e.g., the time cost of A(X)) is typically
quantified using the standard error:

\[
\mathrm{StdErr} = \frac{\sigma}{\sqrt{M}}
\qquad\text{where}\qquad
\sigma = \sqrt{\frac{1}{M} \sum_{j=1}^{M} (T_j - \bar{T})^2}
\]

is the standard deviation and "Tbar" is the average
(computed on an earlier slide).
• Notice: as M grows larger, the StdErr becomes smaller.

Error bars
• The standard error is often used to compute error bars on
graphs to indicate how reliable they are.
• Different error bars have different meanings! Some of them
indicate confidence intervals, some indicate standard
error, some indicate standard deviation - it's important to
know which!

Example
[Figure: time (sec) to add n elements, ArrayList vs.
LinkedList, for n up to 12 x 10^4.]

Stacks and queues.

Stacks and queues
• Let's now bring two more fundamental data structures into
the course.
• So far we have covered lists - array-based lists and
linked lists.
• These are both linear data structures - each element in
the container has at most one successor and one
predecessor.
• Lists are most frequently used when we wish to store
objects in a container and probably never remove them from
it.
• E.g., if Amazon uses a list to store its huge collection
of customers, it has no intention of "removing" a customer
(except at program termination).

Stacks and queues
• Stacks and queues, on the other hand, are examples of
linear data structures in which every object inserted will
generally be removed:
• The stack/queue is intended only as "temporary" storage.
• Both stacks and queues allow the user to add and remove
elements.
• Where they differ is the order in which elements are
removed relative to when they were added.

Stacks
• Stacks are last-in-first-out (LIFO) data structures.
• The classic analogy for a "stack" is a pile of dishes:
• Suppose you've already added dishes A, B, and C to the
"stack" of dishes.
• Now you add one more, D.
• Now you remove one dish - you get D back.
• If you remove another, you get C, and so on.
• With stacks, you can only add to/remove from the top of
the stack. If you try to remove a middle dish, you get that
annoying clanging sound.
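The dish analogy maps directly onto stack operations. A small sketch using Java's standard-library ArrayDeque (not the StackImpl developed later in these slides):

```java
import java.util.ArrayDeque;

// The dish analogy as code: push A, B, C, D onto a stack, then
// pop. The dishes come back in last-in-first-out order: D first.
public class DishStack {
    public static void main(String[] args) {
        ArrayDeque<String> dishes = new ArrayDeque<>();
        dishes.push("A");
        dishes.push("B");
        dishes.push("C");
        dishes.push("D"); // D is now the top of the stack

        System.out.println(dishes.pop());  // prints D
        System.out.println(dishes.pop());  // prints C
        System.out.println(dishes.peek()); // prints B (not removed)
    }
}
```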
(Dish stack, top to bottom: D, C, B, A.)

Stacks
• Stacks find many uses in computer science, e.g.:
• Implementing procedure calls. Consider the following
code:

void f () {
  _num = 4;
  g();
  _num++;
}

void g () {
  h();
  _num = 7;
}

void h () {
  System.out.println("Yup!");
}

How does the CPU know to "jump" from f to g, g to h, then h
back to g, and finally g back to f?

Von Neumann machine
• On all modern machines, a program's instructions and its
data are stored together somewhere in the computer's long
sequence of bits (Von Neumann architecture).
• Just by "glancing" at the contents of computer memory, one
would have no idea whether a certain byte contains code or
data - it's all just bits.
• To keep track of which instruction in memory is currently
being executed, the CPU maintains an Instruction Pointer
(IP).

Code execution
[Memory layout used in the following walkthrough:
  Address  0: _num (data)
  f:   8: _num=4;    12: call g();   16: _num++;   20: return;
  g:  24: call h();  28: _num = 7;   32: return;
  h:  36: ...println("yup!");        40: return;]

• Suppose the IP is 8:
• Then the next instruction to execute is _num=4;
• The CPU then advances the IP to the next instruction
(4 bytes later), to 12.
• The next instruction is call g(). The CPU must now "move"
the IP to address 24 (start of g's code) so g can start.
• g has now started. The first thing g does is call h. We
have to move the IP again.
• h now prints out "yup!".
• The return instruction tells the CPU to move the IP back
to where it was before the current method was called. But
where is that?
• The return call at address 40 should cause the CPU to jump
to address 28 - the next instruction in g.
• We then execute _num=7;
• And now we have to return to where the caller of g left
off (address 16).
• How does the CPU know which address to "return" to?
• We need some kind of data structure to manage the "return
addresses" for us.
• What we need is a last-in-first-out data structure
("stack") to remember all the return addresses:
• Rule 1: Before method x calls method y, method x first
adds its "return address" to the stack.
• Rule 2: When method y "returns" to its caller, it
removes the top of the stack and sets the IP to that
address.
• Let's see this work in practice...
• Before call g(), f "pushes" 16 (its return address) onto
the stack.
• Before call h(), g "pushes" 28 onto the stack. (Return
address stack, top to bottom: 28, 16.)
• h's return "pops" 28 off the stack and jumps to that
address.
• g's return "pops" 16 off the stack and jumps to that
address.

Stack ADT
• To support the last-in-first-out adding/removal of
elements, a stack must adhere to the following interface:

interface Stack<T> {
  // Adds the specified object to the top of the stack.
  void push (T o);
  // Removes the top of the stack and returns it.
  T pop ();
  // Returns the top of the stack without removing it.
  T peek ();
}

Stack ADT

• Similarly to a list, a stack can be implemented
straightforwardly using two kinds of backing stores:
• Linked list
• Array
• Think about how these would work...
• In the case of a linked list, our StackImpl class might
start out like:

class StackImpl<T> {
  DoublyLinkedList12<T> _underlyingStorage;
}
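As a sketch of the array-backed alternative (a hypothetical completion, not the slides' actual implementation), the backing array can be used like the addToBack case of ArrayList: push, pop, and peek all operate on the "back" of the array, so each is O(1).

```java
// Sketch of an array-backed stack. Fixed capacity is assumed;
// resizing is omitted. The top of the stack lives at index
// _numElements - 1, so every operation is O(1).
public class ArrayStack<T> {
    private final Object[] _underlyingStorage;
    private int _numElements = 0;

    public ArrayStack(int capacity) {
        _underlyingStorage = new Object[capacity];
    }

    // Adds the specified object to the top of the stack.
    public void push(T o) {
        _underlyingStorage[_numElements++] = o;
    }

    // Removes the top of the stack and returns it.
    @SuppressWarnings("unchecked")
    public T pop() {
        T top = (T) _underlyingStorage[_numElements - 1];
        _numElements--;
        return top;
    }

    // Returns the top of the stack without removing it.
    @SuppressWarnings("unchecked")
    public T peek() {
        return (T) _underlyingStorage[_numElements - 1];
    }
}
```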
View
Full
Document
This note was uploaded on 11/02/2011 for the course CSE 12 taught by Professor Gary during the Summer '08 term at UCSD.