John Searle - Sarah Otto Marxhausen Taylor Philosophy of...

Sarah “Otto” Marxhausen
5/12/02
Taylor, Philosophy of Mind

Meaning, Use, and Learning about Red Rabbits: Tentative Stabs at Searle’s Chinese Room

John Searle’s Chinese room example is intended to show that genuine understanding in Artificial Intelligence (AI) is unprovable. In the example, someone – let’s call him John – is placed in a room and given a rulebook full of Chinese characters and sentences, which, as a monolingual English speaker, he does not understand. Written Chinese characters, phrases, and questions are put to John by Chinese speakers outside the room, and his rulebook tells him which written responses to make. Eventually, as he memorizes his book of responses, he does this quickly enough to convincingly represent himself to those outside the room as a fluent Chinese speaker. But, says Searle, the fact that John obviously does not understand Chinese means that a similar situation can occur with AI: an AI system might be able to engage convincingly in conversation with us, but we can never tell whether it really understands anything it says.

I have argued in an earlier paper that it is possible that John does understand, in some sense, what he is doing. Here, I plan to examine this possibility more closely. The argument comes down, largely, to what Searle means when he talks about what it is to “understand” a language, or to understand in general.

A note: because this discussion will involve comparisons between human beings, and points about how they understand each other, the problem of other minds can arise, especially given the nature of Searle’s example. One could say that the Chinese room puzzle takes
on the more general challenge of proving the existence of other minds. Every person becomes their own Chinese room, and we cannot have direct access to their minds. This is not Searle’s aim, however; he wants to preserve our empathic link with other people – our assumption that, because they act and are structured like us, they think and understand – for otherwise we cannot identify with John in Searle’s puzzle. So let us assume that the existence of other minds is a given, and move on to how those other minds interact with each other, and how they demonstrate understanding.

There are two different ways in which John can be considered to understand the language he is participating in. W. V. Quine’s work on radical translation becomes relevant here. His famous “gavagai” example illustrates the indeterminacy of translation, and therefore of demonstrating understanding. Suppose a linguist is out walking with a member of a society whose language he does not understand and is trying to learn. A rabbit runs across the path, and the native speaker points at it and says, “gavagai.” The linguist assumes that “gavagai” means rabbit, and as he learns the language, successfully uses “gavagai” in conversations to the satisfaction of native speakers. However, his definition may be
