Name:
Perm #:
Final Exam - Example
CS 165A Artificial Intelligence
First: Please fill out your name and perm # (if you have one) on the top of this page.
The exam is closed book.
Write your answers in
CS 165A Artificial Intelligence, Winter 2011. Assignment #1. Due Thursday, Jan 20, before class.
Notes:
Be sure to re-read the Policy on Academic Integrity on the course syllabus.
measures the probability that the cost is less than or equal to any given
amount; that is, it integrates the original distribution. If the cumulative
distribution for S1 is always to the right of the c
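The dominance test hinted at here can be written out explicitly. The following is a common formalization (my phrasing, not a quote from the text) for a cost attribute, where lower values are preferred; the "to the right" wording in the text corresponds to plotting an attribute scaled so that larger is better, such as negative cost:

```latex
% F_i(c) = P(C_i \le c) is the cumulative cost distribution of plan S_i.
% Plan S_1 stochastically dominates plan S_2 when, for every threshold c,
% S_1 is at least as likely to come in at or under that cost:
F_{S_1}(c) \;=\; P(C_{S_1} \le c) \;\ge\; P(C_{S_2} \le c) \;=\; F_{S_2}(c)
\qquad \text{for all } c .
```

Under this condition, any decision maker who prefers lower cost can prefer S1 without knowing anything else about the utility function.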
make a trip to a new destination, the taxi might take a while to consult
its map and plan the best route. But the next time a similar trip is
requested, the planning process should be
public policy involves both millions of dollars and life and death. For
example, in deciding what levels of a carcinogenic substance to allow
into the environment, policy makers must weigh the prevent
17.1, find all the threshold values for the cost of a step, such that the
optimal policy changes when the threshold is crossed.
17.5 Prove that the calculations in the prediction and estimation phases
In both of these problems, the reasoning is diagnostic. But belief networks are
not limited to diagnostic reasoning and in fact can make four distinct
kinds
to generate a large number of concrete models of the domain that are
consistent with the network distribution. They give an approximation of
the exact evaluation. In the general case, exact inference
John calling. Over the course of 1000 days, we expect one burglary, for
which John is very likely to call. However, John also calls with
probability 0.05 when there actually is no alarm: about 50 times
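The arithmetic behind those expected counts can be checked directly. The rates below come from the passage itself (one burglary per 1000 days, a 0.05 false-call probability); everything else is plain multiplication:

```python
days = 1000
p_burglary = 1 / days          # one expected burglary over the period
p_false_call = 0.05            # John calls even though there is no alarm

expected_burglaries = days * p_burglary
# On the (roughly 999) burglary-free days, John still calls 5% of the time.
expected_false_calls = (days - expected_burglaries) * p_false_call

print(expected_burglaries)          # 1.0
print(round(expected_false_calls))  # about 50 spurious calls
```

So the single genuine call is swamped by roughly fifty false alarms, which is exactly why the diagnostic probability of burglary given a call stays small.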
polytrees. Each simple network has one or more
variables instantiated to a definite value. P(X|E) is computed as a
weighted average over
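The weighted average can be written out; a standard statement of cutset conditioning (reconstructed here, not quoted from the text) sums over assignments c to the cutset variables C:

```latex
% Each term P(X \mid E, C = c) is an efficient polytree computation,
% and the weights P(C = c \mid E) combine the conditioning cases:
P(X \mid E) \;=\; \sum_{c} P(X \mid E,\, C = c)\; P(C = c \mid E)
```

The cost is governed by the number of cutset instantiations, which is why bounding the cutset size matters.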
within a well-understood reasoning system. Work on such a language is
one of the most important topics in knowledge representation research,
and some progress has been made recently (Bacchus, 1990; Bac
closely related to the general computational technique of dynamic
programming. Slightly more complex methods are needed to handle
the case where the length of the action sequence is unbounded. We
briefly
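For unbounded action sequences, the standard device is to discount future rewards and iterate the Bellman update to a fixed point. The sketch below uses a made-up two-state MDP of my own, purely for illustration:

```python
# Value iteration on a toy 2-state MDP (states A, B; actions stay/go).
# Transition and reward tables are invented for this example.
GAMMA = 0.9

# P[s][a] -> list of (next_state, probability); R[s][a] -> immediate reward
P = {
    "A": {"stay": [("A", 1.0)], "go": [("B", 1.0)]},
    "B": {"stay": [("B", 1.0)], "go": [("A", 1.0)]},
}
R = {
    "A": {"stay": 0.0, "go": 1.0},
    "B": {"stay": 2.0, "go": 0.0},
}

V = {s: 0.0 for s in P}
while True:
    # Bellman update: best action value from each state, one step lookahead.
    new_V = {
        s: max(
            R[s][a] + GAMMA * sum(p * V[t] for t, p in P[s][a])
            for a in P[s]
        )
        for s in P
    }
    if max(abs(new_V[s] - V[s]) for s in P) < 1e-9:
        break
    V = new_V

print(V)  # V(B) converges to 2/(1-0.9) = 20, V(A) to 1 + 0.9*20 = 19
```

Discounting makes the update a contraction, so the loop terminates even though the action sequence itself has no fixed length.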
another rule, namely, that M generally is effective against D. Given this
rule, and the student's prior knowledge, the student can now explain why
the expert prescribes M in this particular case. We c
operationality of each subgoal in the rule. A subgoal is operational,
roughly speaking, if it is "easy" to solve. For example, the subgoal
Primitive(z) is easy to solve, requiring at most two steps, w
candidates for the performance element. Decision-tree learning
algorithms that provide real-valued output can also be used (see, for
example, Quinlan's (1993) model trees), but cannot use
th
Nationality(x, n) ∧ Nationality(y, n) ∧ Language(x, l) ⇒ Language(y, l)   (21.4)
(Literal translation: "If x and y have a common nationality n and x speaks
language l, then y also speaks it.") It is not difficult to show
proof. After such an experience, we would like the program to solve the
same problem much more quickly the next time. The
technique of memoization has long been used in computer science to
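A minimal illustration of the technique, using only the standard library (the Fibonacci example is mine, not the book's):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursion made fast: each subproblem is solved once, then cached."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # returns instantly; uncached, this recursion is infeasible
```

Without the cache the call tree has exponentially many repeated subproblems; with it, each distinct argument is computed exactly once, which is the same speedup the text wants from remembering earlier solutions.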
visited. That is, the most important aspect of an implicit representation is
not that it takes up less space, but that it allows for inductive
generalization over input states. For this reason, method
whose left-hand side consists of the leaves of the proof tree, and whose
right-hand side is the variabilized goal (after applying the necessary
bindings from the generalized proof). 4. Drop any condit
But the same generalization would be forthcoming from a traveller
entirely ignorant of colonial history. The relevant prior knowledge in this
case is that, within any given country, most people tend t
neural network would in some cases have to be exponentially larger in
order to represent the same input/output mapping as a belief network
(else we would be able to solve hard problems in polynomial t
identify the necessary conditions for those same steps to apply to another
case. We will use for our reasoning system the simple backward-chaining theorem prover described in Chapter 9. The proof tree
knowledge as well as with the new observations, the effective hypothesis
space size is reduced to include only those theories that are consistent
with what is already known. 2. For any given set of ob
general rules from individual observations. As an example, consider the
problem of differentiating and simplifying algebraic expressions
(Exercise 10.4). If we differentiate an expression such as X^2 w
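A toy version of that domain can fit in a few lines. The sketch below is my own minimal differentiator and simplifier (not the book's program): expressions are numbers, the symbol "X", or nested tuples:

```python
# Expressions: a number, the symbol "X", or ("+", e1, e2) / ("*", e1, e2).
def diff(e):
    """Differentiate expression e with respect to X."""
    if e == "X":
        return 1
    if isinstance(e, (int, float)):
        return 0
    op, a, b = e
    if op == "+":                      # sum rule
        return ("+", diff(a), diff(b))
    if op == "*":                      # product rule
        return ("+", ("*", diff(a), b), ("*", a, diff(b)))
    raise ValueError(f"unknown operator {op!r}")

def simplify(e):
    """Constant folding plus the 0/1 identities for + and *."""
    if not isinstance(e, tuple):
        return e
    op, a, b = e
    a, b = simplify(a), simplify(b)
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return a + b if op == "+" else a * b
    if op == "+":
        if a == 0: return b
        if b == 0: return a
    if op == "*":
        if a == 0 or b == 0: return 0
        if a == 1: return b
        if b == 1: return a
    return (op, a, b)

x_squared = ("*", "X", "X")
print(simplify(diff(x_squared)))   # ('+', 'X', 'X'), i.e. 2X
```

The raw derivative is a thicket of products of zeros and ones; the interesting learning problem in the exercise is acquiring the simplification rules that prune it.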
Boolean nodes are replaced by a meganode that takes on four possible
values: TT, TF, FT, and FF. The meganode has only one parent, the
Boolean variable Cloudy, so there are two conditioning cases. Onc
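The bookkeeping for the meganode's two conditioning cases takes only a few lines. The CPT numbers below are the usual illustrative sprinkler-network values, which may differ from the figure being discussed:

```python
from itertools import product

# Assumed values: P(Sprinkler=T | Cloudy) and P(Rain=T | Cloudy)
p_sprinkler = {True: 0.10, False: 0.50}
p_rain      = {True: 0.80, False: 0.20}

# Meganode over (Sprinkler, Rain): four values TT, TF, FT, FF,
# with the single parent Cloudy, hence exactly two conditioning cases.
meganode_cpt = {}
for cloudy in (True, False):
    for s, r in product((True, False), repeat=2):
        ps = p_sprinkler[cloudy] if s else 1 - p_sprinkler[cloudy]
        pr = p_rain[cloudy] if r else 1 - p_rain[cloudy]
        # Sprinkler and Rain are conditionally independent given Cloudy,
        # so each joint entry is just the product of the two marginals.
        meganode_cpt[(cloudy, (s, r))] = ps * pr

for cloudy in (True, False):
    row = sum(meganode_cpt[(cloudy, v)]
              for v in product((True, False), repeat=2))
    print(cloudy, round(row, 10))   # each conditioning case sums to 1
```

The price of clustering is visible here: the combined node has four values instead of two, and larger clusters grow exponentially in the number of merged variables.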
attribute is defined in such a way that, all other things being equal,
higher values of the attribute correspond to higher utilities. For example,
if we choose as an attribute in the airport problem A
previously considered to be the entire agent: it takes in percepts and
decides on actions. The learning element takes some knowledge about
the performance element and some feedback on how the agent is do
because the row must sum to 1. In Figure 15.9, we dropped one of the
two columns, but here we show all four.) The tricky part about clustering
is choosing the right meganodes. There are several ways t
and the first round is done. To estimate P(WetGrass|Cloudy) (or, in
general, P(X|E)), we repeat the process many times, and then compute
the ratio of the number of runs where WetGrass and Cloudy are true
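That run-counting estimate can be sketched directly. The network below is the usual Cloudy/Sprinkler/Rain/WetGrass example; the specific CPT numbers are my assumption rather than a quote from the text:

```python
import random

def estimate_wet_given_cloudy(runs=100_000, seed=0):
    """Estimate P(WetGrass=T | Cloudy=T) by sampling complete runs
    of the network in topological order and counting."""
    rng = random.Random(seed)
    cloudy_runs = wet_and_cloudy = 0
    for _ in range(runs):
        cloudy = rng.random() < 0.5
        sprinkler = rng.random() < (0.10 if cloudy else 0.50)
        rain = rng.random() < (0.80 if cloudy else 0.20)
        p_wet = {(True, True): 0.99, (True, False): 0.90,
                 (False, True): 0.90, (False, False): 0.00}[(sprinkler, rain)]
        wet = rng.random() < p_wet
        cloudy_runs += cloudy                # runs where the evidence holds
        wet_and_cloudy += cloudy and wet     # runs where query AND evidence hold
    return wet_and_cloudy / cloudy_runs

print(estimate_wet_given_cloudy())  # close to the exact value of about 0.745
```

Note that runs where Cloudy comes out false contribute nothing to the estimate, which is the inefficiency that motivates weighting schemes over plain rejection of inconsistent runs.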
is by no means the general case. Most human learning takes place in the
context of a good deal of background knowledge. Some psychologists
and linguists claim that even newborn babies exhibit knowledge