Searle_Functionalism_slides-1

• Weak AI: The programmed computer simulates the mind. It is a research tool for testing psychological explanations. It can perform tasks that require thought in humans.
• Strong AI: The programmed computer is a mind. The computer really understands and is conscious.
• Searle focuses on ‘intentionality’ in this paper. But conscious intentionality is probably an even bigger problem for computer programs. • Intentionality = meaning, significance, or “aboutness”.
• Random marks on a piece of paper have no meaning. They are not ‘about’ anything. But the words ‘the moon has no atmosphere’ are about the moon. They have a meaning. They somehow connect with an external state of affairs. But the real source of the intentionality is the mind that understands the sentence and thinks the thought (proposition). Without that, the sentence is just a set of marks on paper.
• Searle doubts that computer programs can have this kind of intentionality.
• Roger Schank’s program can answer questions about restaurants. It has a “representation” of general knowledge about restaurants. Does it understand the story, the questions, and its own responses?
• (Similar to chatbot programs like Elbot and iGod today.)
Chinese Room Thought Experiment
• Searle is in a room with a ‘script’, a ‘story’, some ‘questions’ and a ‘program’.
• script: general information that the virtual Chinese person has (e.g. people won’t usually eat a badly burned hamburger).
• story: e.g. the waiter brings a man a burned hamburger and he storms out without paying.
• questions: e.g. Did the man eat the hamburger?
• responses: e.g. No, he didn’t eat the hamburger.
• program: some very complicated rules, based on dissecting the sentences, seeing formal (structural) relationships, reordering words, substituting terms, etc.
• E.g. to any question of the form “Do you like [X]?”, reply “Yes, I like [X]” if X is a member of the set {chocolate, money, fast cars, philosophy, …}; reply “No, I don’t like [X]” if X is a member of {the smell of a wet dog, watching Grey’s Anatomy, …}; and reply “I’m not sure, I don’t know what [X] is” if X is on neither list.
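The purely formal rule described above can be sketched in a few lines of Python. This is my own illustration, not Searle’s or Schank’s actual program; the function name and the list contents are taken from the slide’s example. The point the sketch makes is Searle’s: the answers are produced entirely by matching the shape of the sentence and checking list membership, with no understanding anywhere in the process.

```python
import re

# Hand-written lists of symbols -- the program has no idea what they refer to.
LIKES = {"chocolate", "money", "fast cars", "philosophy"}
DISLIKES = {"the smell of a wet dog", "watching Grey's Anatomy"}

def reply(question: str) -> str:
    """Answer 'Do you like [X]?' by pure symbol manipulation."""
    match = re.fullmatch(r"Do you like (.+)\?", question.strip())
    if not match:
        return "I don't understand the question"
    x = match.group(1)  # the [X] slot, extracted by structural pattern-matching
    if x in LIKES:
        return f"Yes, I like {x}"
    if x in DISLIKES:
        return f"No, I don't like {x}"
    return f"I'm not sure, I don't know what {x} is"
```

For example, `reply("Do you like chocolate?")` returns "Yes, I like chocolate", and a term on neither list gets the "I'm not sure" response. A large enough stack of such rules could, in principle, produce convincing answers while manipulating nothing but uninterpreted marks.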
• The responses to the questions are convincing, as good as those of a real Chinese speaker. Does the person in the room understand Chinese?
• No.

This note was uploaded on 11/30/2010 for the course PHIL 1101 taught by Professor Johns during the Fall '10 term at Langara.
