18. Minds and machines*

The various issues and puzzles that make up the traditional mind-body problem are wholly linguistic and logical in character: whatever few empirical 'facts' there may be in this area support one view as much as another. I do not hope to establish this contention in this paper, but I hope to do something toward rendering it more plausible. Specifically, I shall try to show that all of the issues arise in connection with any computing system capable of answering questions about its own structure, and have thus nothing to do with the unique nature (if it is unique) of human subjective experience.

To illustrate the sort of thing that is meant: one kind of puzzle that is
sometimes discussed in connection with the 'mind-body problem' is the puzzle of privacy. The question 'How do I know I have a pain?' is a deviant¹ ('logically odd') question. The question 'How do I know Smith has a pain?' is not at all deviant. The difference can also be mirrored in impersonal questions: 'How does anyone ever know he himself has a pain?' is deviant; 'How does anyone ever know that someone else is in pain?' is nondeviant. I shall show that the difference in status between the last two questions is mirrored in the case of machines: if T is a Turing machine (see below), the question 'How does T ascertain that it is in state A?' is, as we shall see, 'logically odd' with a vengeance; but if T is capable of investigating its neighbor machine T' (say, T has electronic 'sense-organs' which 'scan' T'), the question 'How does T ascertain that T' is in state A?' is not at all odd.

Another question connected with the 'mind-body problem' is the
question whether or not it is ever permissible to identify mental events and physical events. Of course, I do not claim that this question arises for Turing machines, but I do claim that it is possible to construct a logical analogue for this question that does arise, and that the whole question of 'mind-body identity' can be mirrored in terms of the analogue.

* First published in Sidney Hook (ed.), Dimensions of Mind (New York, 1960). Reprinted by permission of New York University Press.
¹ By a 'deviant' utterance is here meant one that deviates from a semantical regularity (in the appropriate natural language). The term is taken from Ziff, 1960.

362 MINDS AND MACHINES

To obtain such an analogue, let us identify a scientific theory with a
'partially interpreted calculus' in the sense of Carnap.† Then we can perfectly well imagine a Turing machine which generates theories, tests them (assuming that it is possible to 'mechanize' inductive logic to some degree), and 'accepts' theories which satisfy certain criteria (e.g. predictive success). In particular, if the machine has electronic 'sense organs' which enable it to 'scan' itself while it is in operation, it may formulate theories concerning its own structure and subject them to test. Suppose the machine is in a given state (say, 'state A') when, and only when, flip-flop 36 is on. Then this statement, 'I am in state A when, and only when, flip-flop 36 is on', may be one of the theoretical principles concerning its own structure accepted by the machine. Here 'I am in state A' is, of course, 'observation language' for the machine, while 'flip-flop 36 is on' is a 'theoretical expression' which is partially interpreted in terms of 'observables' (if the machine's 'sense organs' report by printing symbols on the machine's input tape, the 'observables' in terms of which the machine would give a partial operational definition of 'flip-flop 36 being on' would be of the form 'symbol # so-and-so appearing on the input tape'). Now all of the usual considerations for and against mind-body identification can be paralleled by considerations for and against saying that state A is in fact identical with flip-flop 36 being on.

Corresponding to Occamist arguments for identity in the one case
are Occamist arguments for identity in the other. And the usual argument for dualism in the mind-body case can be paralleled in the other as follows: for the machine, 'state A' is directly observable; on the other hand, 'flip-flops' are something it knows about only via highly sophisticated inferences. How could two things so different possibly be the same?

This last argument can be put into a form which makes it appear somewhat stronger. The proposition:

(1) I am in state A if, and only if, flip-flop 36 is on,

is clearly a 'synthetic' proposition for the machine. For instance, the
machine might be in state A and its sense organs might report that flip-flop 36 was not on. In such a case the machine would have to make a methodological 'choice' - namely, to give up (1) or to conclude that it had made an 'observational error' (just as a human scientist would be confronted with similar methodological choices in studying his own psychophysical correlations).

† Carnap 1953 and 1956. This model of a scientific theory is too oversimplified to be of much general utility, in my opinion; however, the oversimplifications do not affect the present argument.

363 MIND, LANGUAGE AND REALITY

And just as philosophers have argued from
the synthetic nature of the proposition:

(2) I am in pain if, and only if, my C-fibers are stimulated,

to the conclusion that the properties (or 'states' or 'events') being in pain, and having C-fibers stimulated, cannot possibly be the same (otherwise (2) would be analytic, or so the argument runs); so one should be able to conclude from the fact that (1) is synthetic that the two properties (or 'states' or 'events') - being in state A and having flip-flop 36 on - cannot possibly be the same!

It is instructive to note that the traditional argument for dualism is
not at all a conclusion from 'the raw data of direct experience' (as is shown by the fact that it applies just as well to nonsentient machines), but a highly complicated bit of reasoning which depends on (a) the reification of universals† (e.g. 'properties', 'states', 'events'); and on (b) a sharp analytic-synthetic distinction.

I may be accused of advocating a 'mechanistic' world-view in pressing the present analogy. If this means that I am supposed to hold that machines think,‡ on the one hand, or that human beings are machines, on the other, the charge is false. If there is some version of mechanism sophisticated enough to avoid these errors, very likely the considerations in this paper support it.§

1. Turing Machines

The present paper will require the notion of a Turing machine,ǁ which
will now be explained. Briefly, a Turing machine is a device with a finite number of internal configurations, each of which involves the machine's being in one of a finite number of states,¶ and the machine's scanning a tape on which certain symbols appear.

† This point was made by Quine in Quine, 1957.
‡ Cf. Ziff's paper (1959) and the reply (1959) by Smart. Ziff has informed me that by a 'robot' he did not have in mind a 'learning machine' of the kind envisaged by Smart, and he would agree that the considerations brought forward in his paper would not necessarily apply to such a machine (if it can properly be classed as a 'machine' at all). On the question of whether 'this machine thinks (feels, etc.)' is deviant or not, it is necessary to keep in mind both the point raised by Ziff (that the important question is not whether or not the utterance is deviant, but whether or not it is deviant for nontrivial reasons), and also the 'diachronic-synchronic' distinction discussed in section 5 of the present paper.
§ In particular, I am sympathetic with the general standpoint taken by Smart (1959b) and (1959c). However, see the linguistic considerations in section 5.
ǁ For further details, cf. Davis, 1958 and Kleene, 1952.
¶ This terminology is taken from Kleene, 1952, and differs from that of Davis and Turing.

The machine's tape is divided into separate squares [figure: a row of empty tape squares], on each of which a symbol (from a fixed finite alphabet) may be printed.
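In present-day terms, the tape-and-scanner device just described, together with the scanner, printing mechanism, and sample machine table discussed in the next few paragraphs, can be sketched as a short program. The Python below is an illustrative reconstruction, not part of the original text: the function name run and the dict-based tape are my own, and the table entries are restored from a poorly scanned copy of the addition example, so they should be read as one plausible transcription rather than a definitive one.

```python
# A minimal Turing machine simulator (illustrative reconstruction).
# The table is the unary-addition machine of this section: it replaces
# '+' by '1' and then erases one '1', leaving the sum in unary notation.
#
# TABLE[(state, scanned_symbol)] = (symbol_to_print, move, next_state)
# move: 'L' = left, 'R' = right, 'C' = center (stay).  D is the rest state.
TABLE = {
    ('A', '1'): ('1', 'R', 'A'),   # s1R A: scan right across the first block
    ('A', '+'): ('1', 'R', 'B'),   # s1R B: replace '+' by '1'
    ('A', ' '): (' ', 'C', 'D'),   # s3C D
    ('B', '1'): ('1', 'R', 'B'),   # s1R B: scan right across the second block
    ('B', '+'): ('+', 'C', 'D'),   # s2C D
    ('B', ' '): (' ', 'L', 'C'),   # s3L C: past the end; turn back
    ('C', '1'): (' ', 'L', 'D'),   # s3L D: erase one '1', then rest
    ('C', '+'): ('+', 'L', 'D'),   # s2L D
    ('C', ' '): (' ', 'L', 'D'),   # s3L D
    ('D', '1'): ('1', 'C', 'D'),   # the D column leaves everything unchanged
    ('D', '+'): ('+', 'C', 'D'),
    ('D', ' '): (' ', 'C', 'D'),
}

def run(tape_string, state='A'):
    """Run the machine until it enters the rest state D; return the tape."""
    tape = dict(enumerate(tape_string))   # square index -> symbol
    pos = 0                               # scanner starts on the first square
    while state != 'D':
        scanned = tape.get(pos, ' ')      # unvisited squares are blank
        printed, move, state = TABLE[(state, scanned)]
        tape[pos] = printed               # (a) erase and (b) print
        pos += {'L': -1, 'R': 1, 'C': 0}[move]
    lo, hi = min(tape), max(tape)
    return ''.join(tape.get(i, ' ') for i in range(lo, hi + 1)).strip()

print(run('11+111'))      # 2 + 3 in unary -> '11111'
print(run('1111+11111'))  # 4 + 5 in unary -> '111111111'
```

Note that the sum is written without blanks ('11+111'); with this table, a blank under the scanner in state A would send the machine straight to the rest state.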
Also, the machine has a 'scanner' which 'scans' one square of the tape at a time. Finally, the machine has a printing mechanism which may (a) erase the symbol which appears on the square being scanned, and (b) print some other symbol (from the machine's alphabet) on that square.

Any Turing machine is completely described by a machine table, which is constructed as follows: the rows of the table correspond to letters of the alphabet (including the 'null' letter, i.e. blank space), while the columns correspond to states A, B, C, etc. In each square there appears an 'instruction', e.g. 's5L A', 's7C B', 's3R C'. These instructions are read as follows: 's5L A' means 'print the symbol s5 on the square you are now scanning (after erasing whatever symbol it now contains), and proceed to scan the square immediately to the left of the one you have just been scanning; also, shift into state A.' The other instructions are similarly interpreted ('R' means 'scan the square immediately to the right', while 'C' means 'center', i.e. continue scanning the same square).

The following is a sample machine table:

                    A        B        C        D
  (s1)  1         s1R A    s1R B    s3L D    s1C D
  (s2)  +         s1R B    s2C D    s2L D    s2C D
  (s3)  blank     s3C D    s3L C    s3L D    s3C D

The machine described by this table is intended to function as follows: the machine is started in state A. On the tape there appears a sum (in unary notation) to be 'worked out', e.g. '11 + 111'.

The machine is initially scanning the first '1'. The machine proceeds to 'work out' the sum (essentially by replacing the plus sign by a '1', and then going back and erasing a '1'). Thus, if the 'input' was '1111 + 11111', the machine would 'print out' '111111111', and then go into the 'rest state' (state D).

A 'machine table' describes a machine if the machine has internal
states corresponding to the columns of the table, and if it 'obeys' the instructions in the table in the following sense: when it is scanning a square on which a symbol si appears and it is in, say, state B, then it carries out the 'instruction' in the appropriate row and column of the table (in this case, column B and row si). Any machine that is described by a machine table of the sort just exemplified is a Turing machine.

The notion of a Turing machine is also subject to generalization† in
various ways — for example, one may suppose that the machine has a
second tape (an ‘input tape’) on which additional information may be
printed by an operator in the course of a computation. In the sequel we
shall make use of this generalization (with electronic ‘sense organs’
taking the place of the ‘operator’). It should be remarked that Turing machines are able in principle
to do anything that any computing machine (of whichever kind) can do.‡ It has sometimes been contended (e.g. by Nagel and Newman in their book Gödel's Proof) that 'the theorem [i.e. Gödel's theorem] does indicate that the structure and power of the human mind are far more complex and subtle than any nonliving machine yet envisaged' (p. 10), and hence that a Turing machine cannot serve as a model for the human mind; but this is simply a mistake.

Let T be a Turing machine which 'represents' me in the sense that T
can prove just the mathematical statements I can prove. Then the argument (Nagel and Newman give no argument, but I assume they must have this one in mind) is that, by using Gödel's technique, I can discover a proposition that T cannot prove, and moreover I can prove this proposition. This refutes the assumption that T 'represents' me; hence I am not a Turing machine. The fallacy is a misapplication of Gödel's theorem, pure and simple. Given an arbitrary machine T, all I can do is
find a proposition U such that I can prove:

(3) If T is consistent, U is true,

where U is undecidable by T if T is in fact consistent. However, T can perfectly well prove (3) too! And the statement U, which T cannot prove (assuming consistency), I cannot prove either (unless I can prove that T is consistent, which is unlikely if T is very complicated)!

2. Privacy

Let us suppose that a Turing machine T is constructed to do the
following. A number, say '3000', is printed on T's tape and T is started in T's 'initial state'. Thereupon T computes the 3000th (or whatever the given number was) digit in the decimal expansion of π, prints this digit on its tape, and goes into the 'rest state' (i.e. turns itself off).

† This generalization is made in Davis, 1958, where it is employed in defining relative recursiveness.
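The machine table for such a T is not spelled out in the text, but the computation it performs is easy to sketch in a present-day programming language. The Python function below is a standard 'unbounded spigot' for the decimal digits of π (after Gibbons); it is offered only as an illustration of the kind of procedure T mechanizes, and the name pi_digits is of course my own, not the text's.

```python
def pi_digits(n):
    """Return the first n decimal digits of pi (counting the initial '3'),
    using Gibbons' unbounded spigot algorithm: exact integer arithmetic
    throughout, so every emitted digit is correct."""
    digits = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            # The next digit is fully determined; emit it and rescale.
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # Not enough precision yet; absorb another term of the series.
            q, r, t, k, m, x = (
                q * k, (2 * q + r) * x, t * x, k + 1,
                (q * (7 * k + 2) + r * x) // (t * x), x + 2,
            )
    return digits

print(pi_digits(15))   # the opening digits of the expansion, 3.14159...
# pi_digits(3000)[-1] is the digit T would print for the input '3000'
# (exact, though slow for a spigot this naive).
```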
‡ This statement is a form of Church's thesis (that recursiveness equals effective computability).

Clearly the question 'How does T "ascertain" [or "compute" or "work out"] the 3000th digit in the decimal expansion of π?' is a sensible question. And the answer might well be a complicated one. In fact, an answer would probably involve three distinguishable constituents:

(i) A description of the sequence of states through which T passed in arriving at the answer, and of the appearance of the tape at each stage in the computation.
(ii) A description of the rules under which T operated (these are given by the 'machine table' for T).
(iii) An explanation of the rationale of the entire procedure.

Now let us suppose that someone voices the following objection: 'In order to perform the computation just described, T must pass through states A, B, C, etc. But how can T ascertain that it is in states
A, B, C, etc.?' It is clear that this is a silly objection. But what makes it silly? For one thing, the 'logical description' (machine table) of the machine describes the states only in terms of their relations to each other and to what appears on the tape. The 'physical realization' of the machine is immaterial, so long as there are distinct states A, B, C, etc., and they succeed each other as specified in the machine table. Thus one can answer a question such as 'How does T ascertain that X?' (or 'compute X', etc.) only in the sense of describing the sequence of states through which T must pass in ascertaining that X (computing X, etc.), the rules obeyed, etc. But there is no 'sequence of states' through which T must pass to be in a single state!

Indeed, suppose there were - suppose T could not be in state A
without first ascertaining that it was in state A (by first passing through a sequence of other states). Clearly a vicious regress would be involved. And one 'breaks' the regress simply by noting that the machine, in ascertaining the 3000th digit of π, passes through its states - but it need not in any significant sense 'ascertain' that it is passing through them.

Note the analogy to a fallacy in traditional epistemology: the fallacy of supposing that to know that p (where p is any proposition) one must first know that q1, q2, etc. (where q1, q2, etc., are appropriate other propositions). This leads either to an 'infinite regress' or to the dubious move of inventing a special class of 'protocol' propositions.

The resolution of the fallacy is also analogous to the machine case. Suppose that on the basis of sense experiences E1, E2, etc., I know that there is a chair in the room. It does not follow that I verbalized (or even could have verbalized) E1, E2, etc., nor that I remember E1, E2, etc., nor even that I 'mentally classified' ('attended to', etc.) sense experiences
E1, E2, etc., when I had them. In short, it is necessary to have sense
experiences, but not to know (or even notice) what sense experiences one
is having, in order to have certain kinds of knowledge. Let us modify our case, however, by supposing that whenever the
machine is in one particular state (say, ‘state A’) it prints the words ‘I
am in state A’. Then someone might grant that the machine does not in
general ascertain what state it is in, but might say in the case of state A (after the machine printed ‘I am in state A’): ‘The machine ascertained that it was in state A’. ' , Let us study this case a little more closely. First of all, we want to
suppose that when it is in state A the machine prints 'I am in state A' without first passing through any other states. That is, in every row of the column of the table headed 'state A' there appears the instruction: print† 'I am in state A'. Secondly, by way of comparison, let us consider a human being, Jones, who says 'I am in pain' (or 'Ouch!', or 'Something hurts') whenever he is in pain. To make the comparison as close as possible, we will have to suppose that Jones' linguistic conditioning is such that he simply says 'I am in pain' 'without thinking', i.e. without passing through any introspectible mental states other than the pain itself. In Wittgenstein's terminology, Jones simply evinces his pain by saying 'I am in pain' - he does not first reflect on it (or heed it, or note it, etc.) and then consciously describe it. (Note that this simple possibility of uttering the 'proposition' 'I am in pain' without first performing any mental 'act of judgement' was overlooked by traditional epistemologists from Hume to Russell!) Now we may consider the parallel questions 'Does the machine "ascertain" that it is in state A?' and 'Does Jones "know" that he is in pain?' and their consequences.

Philosophers interested in semantical questions have, as one might
expect, paid a good deal of attention to the verb 'know'. Traditionally, three elements have been distinguished: (1) 'X knows that p' implies that p is true (we may call this the truth element); (2) 'X knows that p' implies that X believes that p (philosophers have quarrelled about the word, some contending that it should be 'X is confident that p' or 'X is in a position to assert that p'; I shall call this element the confidence element); (3) 'X knows that p' implies that X has evidence that p (here I think the word 'evidence' is definitely wrong,‡ but it will not matter for present purposes; I shall call this the evidential element). Moreover, it is part of the meaning of the word 'evidence' that nothing can be literally evidence for itself: if X is evidence for Y, then X and Y must be different things.

† Here it is necessary to suppose that the entire sentence 'I am in state A' counts as a single symbol in the machine's alphabet.
‡ For example, I know that the sun is 93 million miles from the earth, but I have no evidence that this is so. In fact, I do not even remember where I learned this.

In view of such analyses [...] the truth and confidence elements are both present; it is the evidential element that occasions the difficulty.
I do not wish to argue this question here;† the present concern is rather with the similarities between our two questions. For example, one might decide to accept (as nondeviant, 'logically in order', nonselfcontradictory, etc.) the two statements:

(a) The machine ascertained that it was in state A,
(b) Jones knew that he had a pain,

or one might reject both. If one rejects (a) and (b), then one can find alternative formulations which are certainly semantically acceptable: e.g. (for (a)) 'The machine was in state A, and this caused it to print: "I am in state A"'; (for (b)) 'Jones was in pain, and this caused him to say "I am in pain"' (or, 'Jones was in pain, and he evinced this by saying "I am in pain"'). On the other hand, if one accepts (a) and (b), then one must face the questions (a1) 'How did the machine ascertain that it was in state A?' and (b1) 'How did Jones know that he had a pain?' And if one regards these questions as having answers at all, then they

† [...] want to find that this sentence is deviant.

will be degenerate answers - e.g. 'By being in state A' and 'By having
the pain.’ At this point it is, I believe, very clear that the difﬁculty has in both
cases the same cause. Namely, the difﬁculty is occasioned by the fact
that the ‘ve...