fit into the memory cells of actual machines.
Thus, whenever we speak of an algorithm, we shall mean an algorithm that can be implemented
on a RAM, such that all numbers stored in memory cells are “small” numbers, as discussed above.
Admittedly, this is a bit imprecise. For the reader who demands more precision, we can make a
restriction, such as the following: after the execution of m steps, all numbers stored in memory cells are bounded by m^c + d in absolute value, for constants c and d; in making this formal requirement, we assume that the value m includes the number of memory cells of the input.
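To see why some such restriction is needed, consider that an unrestricted RAM could produce astronomically large numbers in very few steps. The following Python sketch (an illustration of my own, not from the text) repeatedly squares a value: after m squarings, the value occupies roughly 2^m bits, which exceeds any bound of the form m^c + d.

```python
def bits_after_squarings(m: int) -> int:
    # Start with 2 and square it m times; each squaring doubles
    # the bit length, so the result has 2**m + 1 bits -- far more
    # than any polynomial bound m**c + d allows.
    x = 2
    for _ in range(m):
        x = x * x          # one "step" in the idealized RAM model
    return x.bit_length()  # actual storage needed, in bits

for m in range(1, 6):
    print(m, bits_after_squarings(m))
```

After only 30 such steps the value would need over a billion bits, so no real machine's memory cells could hold it; this is precisely the kind of behavior the restriction above rules out.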
Even with these caveats and restrictions, the running time as we have defined it for a RAM
is still only a rough predictor of performance on an actual machine. On a real machine, different
instructions may take significantly different amounts of time to execute; for example, a division
instruction may take much longer than an addition instruction.
Also, on a real machine, the
behavior of the cache may significantly affect the time it takes to load or store the operands of an
instruction. However, despite all of these problems, it still turns out that measuring the running
time on a RAM as we propose here is nevertheless a good “first order” predictor of performance
on real machines in many cases.
If we have an algorithm for solving a certain class of problems, we expect that “larger” instances of the problem will require more time to solve than “smaller” instances.
Theoretical computer scientists sometimes equate the notion of an “efficient” algorithm with that of a “polynomial-time” algorithm (although not everyone takes theoretical computer scientists very seriously, especially on this point). A polynomial-time algorithm is one whose running time on inputs of length n is bounded by n^c + d for some constants c and d (a “real” theoretical computer scientist will write this as n^{O(1)}). To make this notion mathematically precise, one needs to define the length of an algorithm’s input.
To define the length of an input, one chooses a “reasonable” scheme to encode all possible inputs
as a string of symbols from some finite alphabet, and then defines the length of an input as the
number of symbols in its encoding.
We will be dealing with algorithms whose inputs consist of arbitrary integers, or lists of such
integers. We describe a possible encoding scheme using the alphabet consisting of the six symbols
‘0’, ‘1’, ‘-’, ‘,’, ‘(’, and ‘)’. An integer is encoded in binary, with possibly a negative sign. Thus, the length of an integer x is approximately equal to log_2 |x|. We can encode a list of integers x_1, . . . , x_n as “(x̄_1, . . . , x̄_n)”, where x̄_i is the encoding of x_i. We can also encode lists of lists, etc., in the obvious way. All of the mathematical objects we shall wish to compute with can be encoded in this way.
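As a concrete illustration of this scheme, here is a small Python sketch (the function names encode_int and encode_list are my own, not from the text) that encodes integers and nested lists over the six-symbol alphabet:

```python
import math

# A sketch of the encoding scheme described above, over the
# six-symbol alphabet '0', '1', '-', ',', '(', ')'.
def encode_int(x: int) -> str:
    # Binary encoding, with a possible leading negative sign.
    return ('-' if x < 0 else '') + format(abs(x), 'b')

def encode_list(xs) -> str:
    # Encode a (possibly nested) list as "(x1,...,xn)",
    # encoding each element recursively.
    return '(' + ','.join(
        encode_list(x) if isinstance(x, list) else encode_int(x)
        for x in xs) + ')'

# For x != 0, encode_int(x) uses floor(log2|x|) + 1 symbols (plus
# one for a sign), i.e. approximately log2|x|, as stated above.
assert len(encode_int(1000)) == math.floor(math.log2(1000)) + 1
print(encode_list([3, -5, [1, 2]]))  # (11,-101,(1,10))
```

Note that the length of a list's encoding is, up to the parentheses and commas, just the sum of the lengths of its elements' encodings, so lengths compose in the natural way.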
For example, to encode an n × n matrix of rational numbers, we may encode each rational number as a pair of integers (the numerator and denominator), each row of the matrix as