CS188
Intro. to AI
Fall, 2000
R. Wilensky
Final Examination
• This is an open-book, open-notes exam.
• Write your name, etc., in the space below; answer all questions in the space provided. (Space is provided for your name at the top of each page as well.)
• You have 3 hours to work on the exam.
• There are 125 points total.
• Questions vary in difficulty; do what you know first.
• Good luck!
NAME:
SID:
TA:
(Space below for official use only.)
Problem 1: (20)
Problem 2: (25)
Problem 3: (20)
Problem 4: (30)
Problem 5: (20)
Problem 6: (10)
Total: (125)
Problem 1
(20 points, 2 points each) For each statement below, say if it is true or false, and give a one sentence
explanation of your answer.
(a) The sentence “∀x,y Parent(x,y) → Child(y,x)” is satisfiable but not logically valid.
True: It is satisfied when Parent and Child are interpreted as inverse relations, which, of course, they need not be.
(b) Any linearly separable data set can be learned by some single layer perceptron.
True:
We proved that perceptrons learn exactly the linearly separable sets.
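To illustrate the convergence claim in (b), here is a minimal perceptron-learning sketch; the function name and the AND data set are mine, not from the exam. On linearly separable data the perceptron convergence theorem guarantees the mistake-driven loop eventually terminates with a separating weight vector.

```python
# Minimal perceptron training sketch (illustrative, not the exam's notation).
def train_perceptron(data, epochs=100):
    """data: list of (x, label) with x a tuple of numbers and label +1/-1."""
    n = len(data[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, label in data:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            predicted = 1 if activation > 0 else -1
            if predicted != label:  # mistake-driven update
                w = [wi + label * xi for wi, xi in zip(w, x)]
                b += label
                errors += 1
        if errors == 0:  # converged: every point classified correctly
            break
    return w, b

# AND is linearly separable, so the perceptron learns it exactly.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_perceptron(data)
```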
(c) Decision tree learning algorithms may be subject to overtraining, but not neural network learning
algorithms.
False: We meant overfitting, which can (and does) occur in NNs as well. (The typo didn't seem to bother many of you, and "overtraining" is not a bad term for what happens anyway.)
(d) Given the expression (which we corrected during the exam, to include r as an existential variable and to make the ∃! a plain ∃)
∃g,a,r,p,d Ind(g,Giving) ∧ Agent(a,g) ∧ Recipient(r,g) ∧ Donor(d,g) ∧ Theme(p,g)
we can derive the following expression, assuming that all the constants do not otherwise appear in any other formula:
Ind(G1,Giving) ∧ Agent(A1,G1) ∧ Recipient(R1,G1) ∧ Donor(D1,G1) ∧ Theme(P1,G1)
True: Via existential instantiation (Skolemization).
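The mechanics of the rule in (d) can be sketched concretely: replace each existentially quantified variable with a fresh constant that appears nowhere else. The tuple representation and function name below are my own illustration, not the exam's notation.

```python
# Minimal existential-instantiation (Skolemization) sketch. A formula is
# represented as a list of (predicate, term, term) tuples; illustrative only.
def instantiate(variables, literals, used_constants):
    """Replace each existential variable with a fresh constant."""
    subst = {}
    for v in variables:
        c = v.upper() + "1"
        # The rule is sound only if the new constant appears nowhere else.
        assert c not in used_constants
        subst[v] = c
    return [tuple(subst.get(t, t) for t in lit) for lit in literals]

formula = [("Ind", "g", "Giving"), ("Agent", "a", "g"),
           ("Recipient", "r", "g"), ("Donor", "d", "g"), ("Theme", "p", "g")]
result = instantiate(["g", "a", "r", "d", "p"], formula, used_constants=set())
# result spells out Ind(G1,Giving), Agent(A1,G1), Recipient(R1,G1), ...
```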
(e) Temporal difference learning can be used for deterministic MDPs, but not for nondeterministic
tasks.
False: TD-Gammon is a case in point.
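The reason TD learning handles nondeterminism is visible in the update rule itself: it only ever uses sampled transitions, never a transition model. Here is a minimal TD(0) sketch; the episode representation and the two-state chain are my assumptions, not from the exam.

```python
# Minimal TD(0) value-update sketch (illustrative). Because the target
# r + gamma * V(s') is built from a sampled transition, the same rule
# applies unchanged when the environment is nondeterministic.
def td0(episodes, alpha=0.1, gamma=1.0):
    """episodes: trajectories given as lists of (state, reward, next_state)."""
    V = {}
    for episode in episodes:
        for s, r, s2 in episode:
            v, v2 = V.get(s, 0.0), V.get(s2, 0.0)
            # TD(0): move V(s) toward the sampled target r + gamma * V(s')
            V[s] = v + alpha * (r + gamma * v2 - v)
    return V

# A two-state chain: s1 -> s2 (reward 0), s2 -> terminal (reward 1).
V = td0([[("s1", 0.0, "s2"), ("s2", 1.0, "end")]] * 100)
# Both values drift toward the true return of 1.
```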
(f) If a heuristic always returns the same value no matter what the state, it cannot be admissible.
False. It can still be admissible (h ≡ 0, for instance, never overestimates), just not terribly useful.
(g) The Markov assumption enables the Viterbi algorithm to be computationally tractable.
True.
It allows us to limit the amount of state we need to keep track of.
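The point in (g) can be made concrete: because of the Markov assumption, each step of Viterbi needs only the best score for each current state, giving O(T·|S|²) time instead of enumerating all |S|^T state sequences. A minimal sketch follows; the function and the tiny two-state HMM are my illustration, not the exam's.

```python
# Minimal Viterbi sketch (illustrative). The Markov assumption lets each
# step summarize the entire past by one best score per state.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # best[s] = probability of the best path ending in state s
    best = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    back = []
    for o in obs[1:]:
        prev = best
        best, ptr = {}, {}
        for s in states:
            p, arg = max((prev[s0] * trans_p[s0][s], s0) for s0 in states)
            best[s] = p * emit_p[s][o]
            ptr[s] = arg
        back.append(ptr)
    # follow the back-pointers to recover the most likely state sequence
    last = max(best, key=best.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path)), best[last]

# Toy HMM where each observation uniquely identifies the hidden state.
states = ["A", "B"]
start = {"A": 0.5, "B": 0.5}
trans = {"A": {"A": 0.5, "B": 0.5}, "B": {"A": 0.5, "B": 0.5}}
emit = {"A": {"a": 1.0, "b": 0.0}, "B": {"a": 0.0, "b": 1.0}}
path, prob = viterbi(["a", "b", "a"], states, start, trans, emit)
```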
(h) A set of propositions in a “production system”, interpreted using a set of conflict-resolution strategies, has the same semantics as they would if interpreted as a knowledge base of logic formulas.
False. In logic, a sentence like “∀x P(x)” means that P is true for all x; in a production system, it might mean that P is true for all x except where there is more specific information.
(i) Backpropagation is equivalent to using gradient descent to find a local minimum of an error function.
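The equivalence claimed in (i) is easiest to see for the smallest possible "network", a single linear unit: the update below is exactly gradient descent on squared error, and backpropagation is the same rule applied layer by layer via the chain rule. The data and names are my illustration, not from the exam.

```python
# Minimal gradient-descent sketch (illustrative). For a single linear unit
# with error E = 0.5 * sum((w*x - y)^2), the update w -= lr * dE/dw is the
# rule that backpropagation generalizes to multi-layer networks.
def descend(pairs, lr=0.1, steps=100):
    w = 0.0
    for _ in range(steps):
        # gradient of the total squared error with respect to w
        grad = sum((w * x - y) * x for x, y in pairs)
        w -= lr * grad
    return w

# Data generated by y = 2x, so descent converges to a weight near 2.
w = descend([(1.0, 2.0), (2.0, 4.0)])
```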