62 THE PATTERN ON THE STONE

uniqueness. At the root of these earlier philosophical crises was a misplaced judgment of the source of human worth. I am convinced that most of the current philosophical discussions about the limits of computers are based on a similar misjudgment.

TURING MACHINES

The central idea in the theory of computation is that of a universal computer: a computer powerful enough to simulate any other computing device. The general-purpose computer described in the preceding chapters is an example of a universal computer; in fact, most computers we encounter in everyday life are universal computers. With the right software and enough time and memory, any universal computer can simulate any other type of computer, or (as far as we know) any other device at all that processes information.

One consequence of this principle of universality is that
the only important difference in power between two computers is their speed and the size of their memory. Computers may differ in the kinds of input and output devices connected to them, but these so-called peripherals are not essential characteristics of a computer, any more than its size or its cost or the color of its case. In terms of what they are able to do, all computers (and all other types of universal computing devices) are fundamentally identical.

The idea of a universal computer was recognized and described in 1937 by the British mathematician Alan Turing.
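To make the construct concrete before the description that follows, here is a minimal sketch of a Turing machine in Python: a finite-state controller that reads one symbol at a time from an unbounded tape, writes a symbol, and moves. The particular rule table (a machine that adds one to a binary number) is a hypothetical example chosen for brevity, not one of Turing's own.

```python
# A minimal Turing machine: a finite-state controller plus an unbounded tape.
# The rule table is a hypothetical example (it adds one to a binary number):
# rules map (state, symbol read) -> (next state, symbol to write, head move).

RULES = {
    ("inc", "1"): ("inc", "0", -1),    # carrying: turn 1 into 0, move left
    ("inc", "0"): ("halt", "1", 0),    # absorb the carry and stop
    ("inc", " "): ("halt", "1", 0),    # ran off the left edge: new leading 1
}

def run(tape, head, state="inc"):
    cells = dict(enumerate(tape))                 # sparse tape; blank is " "
    while state != "halt":
        state, write, move = RULES[(state, cells.get(head, " "))]
        cells[head] = write
        head += move
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, " ") for i in span).strip()

print(run("1011", head=3))   # binary 11 + 1 -> "1100"
```

Everything interesting lives in the fixed, finite rule table; the loop itself is the same for every machine.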
Turing, like so many other computing pioneers, was interested in the problem of making a machine that could think, and he invented a scheme for a general-purpose computing machine. Turing referred to his imaginary construct as a "universal machine," since at that time the word "computer" still meant "a person who performs computations."

HOW UNIVERSAL ARE TURING MACHINES?

To picture a Turing machine, imagine a mathematician
performing calculations on a scroll of paper. Imagine further that the scroll is infinitely long, so that we don't need to worry about running out of places to write things down. The mathematician will be able to solve any solvable computational problem, no matter how many operations are involved, although it may take him an inordinate amount of time. Turing showed that any calculation that can be performed by a smart mathematician can also be performed by a stupid but meticulous clerk who follows a simple set of rules for reading and writing the information on the scroll. In fact, he showed that the human clerk can be replaced by a finite-state machine. The finite-state machine looks at only one symbol on the scroll at a time, so the scroll is best thought of as a narrow paper tape, with a single symbol on each line.

Today, we call the combination of a finite-state machine
with an infinitely long tape a Turing machine. The tape of a Turing machine is analogous to, and serves much the same function as, the memory of a modern computer. All that the finite-state machine does is read or write a symbol on the tape and move back and forth according to a fixed and simple set of rules. Turing showed that any computable problem could be solved by writing symbols on the tape of a Turing machine: symbols that would specify not just the problem but also the method of solving it. The Turing machine computes the answer by moving back and forth across the tape, reading and writing symbols, until the solution is written on the tape.

I find Turing's particular construction difficult to think
about. To me, the conventional computer, which has a memory instead of a tape, is a more comprehensible example of a universal machine. For instance, it is easier for me to see how a conventional computer can be programmed to simulate a Turing machine than vice versa. What is amazing to me is not so much Turing's imaginary construct but his hypothesis that there is only one type of universal computing machine. As far as we know, no device built in the physical universe can have any more computational power than a Turing machine. To put it more precisely, any computation that can be performed by any physical computing device can be performed by any universal computer, as long as the latter has sufficient time and memory. This is a remarkable statement, suggesting as it does that a universal computer with the proper programming should be able to simulate the function of a human brain.

LEVELS OF POWER

How can Turing's hypothesis be true? Surely some other
kind of computer could be more powerful than the ones we have described. For one thing, the computers we have discussed so far have been binary; that is, they represent everything in terms of 1 and 0. Wouldn't a computer be more powerful if it could represent things in terms of a three-state logic, like Yes, No, and Maybe? No, it would not. We know that a three-state computer would be able to do no more than a two-state computer, because you can simulate the one using the other. With a two-state computer, you can duplicate any operation that can be performed on a three-state computer, by encoding each of the three states as a pair of bits: 00 for Yes, say, and 11 for No, and 01 for Maybe. For every possible function in three-state logic, there is a corresponding function in two-state logic which operates on this representation. This is not to say that three-state computers might not have some practical advantage over two-state computers: for instance, they might use fewer wires and therefore might be smaller or cheaper to produce. But we can say for certain that they would not be able to do anything new.
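The encoding argument can be sketched directly. The three-valued AND below is a hypothetical example function (No dominates, then Maybe); the bit pairs follow the text's 00/11/01 assignment, and the corresponding two-state function is built mechanically as a table on those pairs.

```python
# Simulating three-state logic on a two-state machine, using the encoding
# from the text: Yes -> 00, No -> 11, Maybe -> 01.
ENCODE = {"Yes": (0, 0), "No": (1, 1), "Maybe": (0, 1)}
DECODE = {bits: name for name, bits in ENCODE.items()}

def and3(a, b):
    """A hypothetical three-valued AND: No dominates, then Maybe, then Yes."""
    if "No" in (a, b):
        return "No"
    return "Maybe" if "Maybe" in (a, b) else "Yes"

# Mechanically build the corresponding two-state function: a lookup table
# that operates purely on pairs of bits.
TABLE = {(ENCODE[a], ENCODE[b]): ENCODE[and3(a, b)]
         for a in ENCODE for b in ENCODE}

print(DECODE[TABLE[(0, 0), (0, 1)]])   # Yes AND Maybe -> prints "Maybe"
```

The same construction works for any three-state function at all, which is the whole point: nothing about the logic required a third state in the hardware.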
They would just be one more version of a universal machine.

A similar argument holds for four-state computers, or five-state computers, or computers with any finite number of states. But what about computers that compute with analog signals, that is, signals with an infinite number of possible values? For example, imagine a computer that uses a continuous range of voltages to indicate numbers. Instead of just two or three or five possible messages, each signal could carry an infinite number of possible messages, corresponding to the continuous range of voltages. For instance, an analog computer might represent a number between 0 and 1 by a voltage between zero and one volt. The fraction could be represented to any level of precision, no matter the number of decimal places, by using the exact corresponding voltage.

Computers that represent quantities by such analog signals do exist, and in fact the earliest computers worked this way. They are called analog computers, to distinguish them from the digital computers we have been discussing, which have a discrete number of possible messages in each signal.
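The comparison the chapter develops below can be checked with a line of arithmetic. This sketch assumes the one-part-in-a-million noise figure used later in the text:

```python
import math

# A signal whose noise is one part in a million supports about a million
# distinguishable levels; twenty bits are enough to label all of them.
distinctions = 1_000_000
bits = math.ceil(math.log2(distinctions))
print(bits, 2 ** bits)                          # 20 1048576

# Doubling the analog signal's distinctions means doubling its accuracy;
# the digital signal just gains one bit.
print(math.ceil(math.log2(2 * distinctions)))   # 21
```

The asymmetry in those last two lines, linear effort versus one extra bit, is the practical heart of the digital advantage.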
One might suppose that analog computers would be more powerful, since they can represent a continuum of values, whereas digital computers can represent data only as discrete numbers. However, this apparent advantage disappears if we take a closer look. A true continuum is unrealizable in the physical world.

The problem with analog computers is that their signals can achieve only a limited degree of accuracy. Any type of analog signal (electrical, mechanical, chemical) will contain a certain amount of noise; that is, at a certain level of resolution, the signal will be essentially random. Any analog signal is bound to be affected by numerous irrelevant and unknown sources of noise: for example, an electrical signal can be disturbed by the random motion of molecules inside a wire, or by the magnetic field created when a light is turned on in the next room. In a very good electrical circuit, this noise can be made very small, say, a millionth the size of the signal itself, but it will always exist. While there are an infinite number of possible signal levels, only a finite number of levels represent meaningful distinctions, that is, represent information. If one part in a million in a signal is noise, then there are only about a million meaningful distinctions in the signal; therefore, information in the signal can be represented by a digital signal that uses twenty bits (2²⁰ = 1,048,576). Doubling the number of meaningful distinctions in an analog computer would require making everything twice as accurate, whereas in a digital computer you could double the number of meaningful distinctions by adding a single bit. The very best analog computers have fewer than thirty bits of accuracy. Since digital computers often represent numbers using thirty-two or sixty-four bits, they can in practice generate a much larger number of meaningful distinctions than analog computers can.

Some people might argue that while the noise of an analog computer may not be meaningful, it is not necessarily useless. One can certainly imagine computations that are helped by the presence of noise. Later, for example, we will describe computations requiring random numbers. But a digital computer, too, can generate random noise if randomness is called for in a computation.

RANDOM NUMBERS

How can a digital computer generate randomness? Can a
deterministic system like a computer produce a truly random sequence of numbers? In a formal sense, the answer is No, since everything a digital computer does is determined by its design and its inputs. But the same could be said of a roulette wheel; after all, the ball's final landing place is determined by the physics of the ball (its mass, its velocity) and the spinning wheel. If we knew the exact design of the apparatus and the exact "inputs" governing the spin of the wheel and the throw of the ball, we could predict the number on which the ball would land. The outcome appears random because it exhibits no obvious pattern and is difficult, in practice, to predict.
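The same deterministic-yet-patternless behavior can be sketched in a few lines. A linear congruential generator is one of the "simpler ways" of producing a pseudorandom sequence discussed below; the particular constants here are a common published choice, an implementation assumption rather than anything from the text.

```python
def lcg(seed, a=1664525, c=1013904223, m=2 ** 32):
    """A linear congruential generator: every output is fully determined
    by the previous one, yet the sequence shows no obvious pattern."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m            # scale to a fraction between 0 and 1

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])
```

Run it twice with the same seed and it replays exactly the same "random" numbers, which is precisely what makes the sequence pseudorandom rather than random.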
sequence of numbers that is random in the same sense. In
fact, using a mathematical model, the computer could simu
late the physics of the roulette wheel and throw a simulated
ball at a slightly different angle each time in order to pro
duce each number in the sequence. Even if the angles at
which the computer throws the simulated ball follow a con
sistent pattern, the simulated dynamics of the wheel would
transform these tiny differences into what amounts to an
unpredictable sequence of numbers. Such a sequence of
numbers is called a pseudorandom sequence, because it only
appears random to an observer who does not know how it
was computed. The sequence produced by a pseudorandom
number generator can pass all normal statistical tests of ran
domness. A roulette wheel is an example of what physicists call a
chaotic system—a system in which a small change in the ini
tial conditions (the throw, the mass of the ball, the diameter
of the wheel, and so forth] can produce a large change in the
state to which the system evolves (the resulting number].
This notion of a chaotic system helps explain how a deter
ministic set of interactions can produce unpredictable
results. In a computer, there are simpler ways to produce a
pseudorandom sequence than simulating a roulette wheel,
but they are all conceptually similar to this model. Digital computers are predictable and unpredictable in
exactly the same senses as the rest of the physical world. They follow deterministic laws, but these laws have complicated consequences that are extremely difficult to predict. It is often impractical to guess what computers are going to do before they do it. As is true of physical systems, it does not take much to make a computation complex. In computers, chaotic systems, systems whose outcomes depend sensitively on the initial conditions, are the norm.

COMPUTABILITY

While a universal computer can compute anything that can
be computed by any other computing device, there are some things that are just impossible to compute. Of course, it is not possible to compute answers to vaguely defined questions, like "What is the meaning of life?" or questions for which we lack data, like "What is the winning number in tomorrow's lottery?" But there are also flawlessly defined computational problems that are impossible to solve. Such problems are called noncomputable.

I should warn you that noncomputable problems hardly ever come up in practice. In fact, it is difficult to find examples of a well-defined noncomputable problem that anybody wants to compute. A rare example of a well-defined, useful, but noncomputable problem is the halting problem. Imagine that I want to write a computer program that will examine another computer program and determine whether or not that program will eventually stop. If the program being examined has no loops or recursive subroutine calls, it is bound to finish eventually, but if it does have such constructs the program may well go on forever. It turns out that there is no algorithm for examining a program and determining whether or not it is fatally infected with an endless loop.
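A concrete taste of the difficulty, though not itself the proof: the loop below iterates the famous Collatz rule. It has halted for every starting value ever tried, but whether it halts for all positive integers is, as of this writing, an open mathematical problem. No inspection of its three-line body settles the question.

```python
def collatz_steps(n):
    """Iterate the Collatz rule until n reaches 1. Does this loop terminate
    for every positive integer n? Nobody knows."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))   # halts after 111 steps; proving it always halts is another matter
```

If even this tiny program resists analysis, it is less surprising that no general loop-detecting algorithm can exist.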
Moreover, it's not that no one has yet discovered such an algorithm; rather, no such algorithm is possible. The halting problem is noncomputable.

To understand why, imagine for a moment that I do have such a program, called Test-for-Halt, and that it takes the program to be tested as an input. (Treating a program as data may seem strange, but it's perfectly possible, because a program, just like anything else, can be represented as bits.) I could insert the Test-for-Halt program as a subroutine in another program, called Paradox, which will perform Test-for-Halt on Paradox itself. Imagine that I have written the Paradox program so that whatever Test-for-Halt determines, Paradox will do the opposite. If Test-for-Halt determines that Paradox is eventually going to halt, then Paradox is programmed to go into an infinite loop. If Test-for-Halt determines that Paradox is going to go on forever, then Paradox is programmed to halt. Since Paradox contradicts Test-for-Halt, Test-for-Halt doesn't work on Paradox; therefore, it doesn't work on all programs. And therefore a program that computes the halting function cannot exist.

The halting problem, which was dreamed up by Alan Turing, is chiefly important as an example of a noncomputable problem, and most noncomputable problems that do come up in practice are similar to or equivalent to it. But a computer's inability to solve the halting problem is not a weakness of the computer, because the halting problem is inherently unsolvable: no machine can be constructed that solves it. And as far as we know, there is nothing that can perform any other computation that cannot be performed by a universal machine. The class of problems that are computable by a digital computer apparently includes every problem that is computable by any kind of device. (This last statement is sometimes called the Church thesis, after one of Turing's contemporaries, Alonzo Church. Mathematicians had been thinking about computation and logic for centuries, but, in one of the more dazzling examples of synchrony in science, Turing, Church, and the mathematician Emil Post all independently invented the idea of universal computation at roughly the same time. They had very different ways of describing it, but they all published their results in 1937, setting the stage for the computer revolution soon to follow.)

Another noncomputable function, closely related to the
halting problem, is the problem of deciding whether any given mathematical statement is true or false. There is no algorithm that can solve this problem either, a conclusion of Gödel's incompleteness theorem, which was proved by Kurt Gödel in 1931, just before Turing described the halting problem. Gödel's theorem came as a shock to many mathematicians, who until then had generally assumed that any mathematical statement could be proved true or false. Gödel's theorem states that within any self-consistent mathematical system powerful enough to express arithmetic, there exist statements that can be neither proved nor disproved. Mathematicians saw their job as proving or disproving statements, and Gödel's theorem proved that their "job" was in certain instances impossible.

Some mathematicians and philosophers have ascribed
almost mystical properties to Gödel's incompleteness theorem. A few believe that the theorem proves that human intuition somehow surpasses the power of a computer, that human beings may be able to "intuit" truths that are impossible for machines to prove or disprove. This is an emotionally appealing argument, and it is sometimes seized upon by philosophers who don't like being compared to computers. But the argument is fallacious. Whether or not people can successfully make intuitive leaps that cannot be made by computers, Gödel's incompleteness theorem provides no reason to believe that there are mathematical statements that can be proved by a mathematician but can't be proved by a computer. As far as we know, any theorem that can be proved by a human being can also be proved by a computer. Humans cannot compute noncomputable problems any more than computers can.

Although one is hard pressed to come up with specific
examples of noncomputable problems, one can easily prove that most of the possible mathematical functions are noncomputable. This is because any program can be specified in a finite number of bits, whereas specifying a function usually requires an infinite number of bits, so there are a lot more functions than programs. Consider the kind of mathematical function that converts one number into another: the cosine, say, or the logarithm. Mathematicians can define all kinds of bizarre functions of this type: for example, the function that converts every decimal number into the sum of its digits. As far as I know, this function is a useless one, but a mathematician would regard it as a legitimate function simply because it converts every number into exactly one other number. It can be proved mathematically that there are infinitely more functions than programs. Therefore, for most functions there is no corresponding program that can compute them. The actual counting involves all kinds of difficulties (including counting infinite things and distinguishing between various degrees of infinity), but the conclusion is correct: statistically speaking, most mathematical functions are noncomputable. Fortunately, almost all these noncomputable functions are useless, and virtually all the functions we might want to compute are computable.

QUANTUM COMPUTING

As noted earlier, the pseudorandom number sequences produced by computers look random, but there is an underlying
algorithm that generates them. If you know how a sequence is generated, it is necessarily predictable and not random. If ever we needed an inherently unpredictable random-number sequence, we would have to augment our universal machine with a nondeterministic device for generating randomness.
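In practice this augmentation is routine: operating systems harvest entropy from hard-to-predict physical events (interrupt timing, device noise) and expose it alongside the deterministic generator. A Python sketch of the contrast; whether such sources are "truly" random in the strict sense taken up next is exactly the question at issue, but engineering practice treats them as nondeterministic.

```python
import random
import secrets

# The deterministic side: seeding replays exactly the same "random" sequence.
rng, replay = random.Random(2024), random.Random(2024)
assert [rng.random() for _ in range(5)] == [replay.random() for _ in range(5)]

# The nondeterministic side: bytes drawn from the operating system's entropy
# pool, which mixes in hard-to-predict physical events.
print(secrets.token_hex(8))   # different on every run
```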
One might imagine such a randomness-generating device as being a kind of electronic roulette wheel, but, as we have seen, such a device would not be truly random: it still obeys deterministic laws of physics. The only way we know how to achieve genuinely unpredictable effects is to rely on quantum mechanics. Unlike the classical physics of the roulette wheel, in which effects are determined by causes, quantum mechanics produces effects that are purely probabilistic. There is no way of predicting, for example, when a given uranium atom will decay into lead. Therefore one could use a Geiger counte...