
PH100 Lecture Notes


Objection 1/b: Minds are not semantic engines, because understanding requires original intentionality and semantic engines have only derived intentionality.
- Response: What does it mean to have original intentionality?

Objection 1/a: Minds are not semantic engines, because understanding requires caring and semantic engines do not care.
- Response: Why can't we program machines to care?

2. Objection 2 (Poor Substitute Strategy): Minds cannot be semantic engines/machines, because semantic engines/machines are not even capable of acting as if they understand.

John Searle: "Is the Brain's Mind a Computer Program?"
- Searle offers a refutation of strong AI.
- Strong AI: thinking is independent of the brain; brains think, but other things can think too.
- Weak AI: machines can simulate thinking.
- If Searle is right, then strong AI is false:
  - Computers cannot think.
  - Even if computers pass the Turing test, we should not conclude that they are intelligent or that they think.
  - At best, computers can simulate thinking.
- If Searle is right, then the computational response is also false:
  - Minds are not computers.
  - Minds can think; computers cannot.

The Chinese Room Argument
- "I satisfy the Turing test for understanding Chinese. All the same, I am totally ignorant of Chinese" (26).
- Analogy between the Chinese Room and a computer: if the person inside the Chinese Room does not understand Chinese, then computers cannot understand either.
- "The point of the thought experiment is this [...] Digital computers merely manipulate formal symbols according to the rule book" (26).

The argument:
1. Computers are formal systems (syntactic).
2. Human minds have mental contents (semantics).
3. Syntax by itself is neither constitutive of nor sufficient for semantics.
4. Programs are neither constitutive of nor sufficient for semantics.
5. Therefore, strong AI is false.

"You can't get semantically loaded thought contents from formal computations alone."

Common responses to the Chinese Room Argument
1. Understanding without knowledge: One can understand X without knowing that one understands X. "It is ... possible to understand something without knowing that one understands it" (29). Example: language understanding at an early developmental stage.
2. Understanding of a subsystem: The person in the Chinese Room does not understand Chinese, but an unconscious subsystem of him/her does.
3. Systems reply: The person in the Chinese Room does not understand Chinese, but the whole room (i.e., person + instructions + notes + whatever else is in the room) does. "You are like a single neuron in the brain, and just as such a single neuron by itself cannot understand but only contributes to the understanding of the whole system, you don't understand, but the whole system does" (29).