pacman.py (original)
pacman.py
Licensing Information: You are free to use or extend these projects for
educational purposes provided that (1) you do not distribute or publish
solutions, (2) you retain this notice, and (3) you provide clear
attribution
Past Exam Questions: Search

1 Search and Heuristics

Imagine a car-like agent wishes to exit a maze like the one shown below:

The agent is directional and at all times faces some direction d (N, S, E, W). With a single action, the agent
can either move forward...
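The directional state space this question describes can be sketched in code. Everything below is an illustrative assumption, not the exam's formulation: states are (row, col, direction) triples, walls are a set of blocked squares, and the action set is read as "move forward" plus 90-degree turns (the original action description is truncated above).

```python
# Hypothetical successor function for a car-like maze agent.
# A state is (row, col, direction); walls is a set of blocked (row, col) squares.
DIRS = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}
TURNS = {"N": ["E", "W"], "S": ["E", "W"], "E": ["N", "S"], "W": ["N", "S"]}

def successors(state, walls):
    """Yield (action, next_state) pairs for the directional agent."""
    r, c, d = state
    dr, dc = DIRS[d]
    if (r + dr, c + dc) not in walls:       # forward only if no wall ahead
        yield ("forward", (r + dr, c + dc, d))
    for nd in TURNS[d]:                     # turning changes heading, not position
        yield ("turn-" + nd, (r, c, nd))
```

Because heading is part of the state, the state space is four times the number of open squares, which is exactly what makes heuristic design for this agent interesting.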
CS 188
Spring 2010
Final Exam
Solutions
Introduction to
Artificial Intelligence
Q1. [14 pts] Search
For the following questions, please choose the best answer (only one answer per question). Assume a finite search
space.
(a) [2 pts] Depth-first search can be ma...
CS 188
Fall 2011
Introduction to
Artificial Intelligence
Midterm Exam
INSTRUCTIONS
You have 3 hours.
The exam is closed book, closed notes except a one-page crib sheet.
Please use non-programmable calculators only.
Mark your answers ON THE EXAM ITSELF.
CS 188
Spring 2012
Introduction to
Artificial Intelligence
Midterm II
You have 2 hours.
The exam is closed book, closed notes except a one-page crib sheet.
Please use non-programmable calculators only.
Mark your answers ON THE EXAM ITSELF. If you are no
Midterm II
Solutions
Introduction to
Artificial Intelligence
CS 188
Spring 2012
Q1. [18 pts] Markov Decision Processes
(a) [4 pts] Write out the equations used to compute Q_i from R, T, V_{i-1}, and to compute V_i from Q_i:

    Q_i(s, a) = \sum_{s'} T(s, a, s') [ R(s, a, s') + \gamma V_{i-1}(s') ]

    V_i(s) = \max_a Q_i(s, a)
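A minimal sketch of these two value-iteration updates in code. The two-state MDP below (the `T` and `R` functions) is invented purely for illustration and is not from the exam:

```python
# Value iteration: Q_i(s,a) = sum_{s'} T(s,a,s')[R(s,a,s') + gamma * V_{i-1}(s')],
# V_i(s) = max_a Q_i(s,a).
def value_iteration(states, actions, T, R, gamma, iters):
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        Q = {(s, a): sum(p * (R(s, a, s2) + gamma * V[s2])
                         for s2, p in T(s, a))
             for s in states for a in actions}
        V = {s: max(Q[(s, a)] for a in actions) for s in states}
    return V

# Toy MDP: from "a", action "go" reaches "b" with reward 1; every other
# (state, action) self-loops with reward 0, so "b" is absorbing.
def T(s, a):
    return [("b", 1.0)] if (s, a) == ("a", "go") else [(s, 1.0)]

def R(s, a, s2):
    return 1.0 if (s, a, s2) == ("a", "go", "b") else 0.0

V = value_iteration(["a", "b"], ["stay", "go"], T, R, gamma=0.5, iters=25)
```

Here V["a"] converges to 1.0 (take "go" once) and V["b"] stays 0, matching what the equations give by hand.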
CS 188
Spring 2011
Introduction to
Artificial Intelligence
Practice Midterm
To earn the extra credit, one of the following has to hold true. Please circle and sign.
A I spent 3 or more hours on the practice midterm.
B I spent fewer than 3 hours on the practice midterm.
CS188 Fall 2014 Section 2: A* and Heuristics

1 Knight's Path

A knight is a chess piece where each move takes the piece 1 square in one direction and 2 squares in an orthogonal
direction. We want to guide our knight to its goal state in as few moves as possible...
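In the spirit of the section's A* theme, one way to solve this is best-first search with an admissible heuristic. The unbounded board and the particular heuristic are assumptions for this sketch: since one knight move changes each coordinate by at most 2, max(|dx|, |dy|)/2 never overestimates the remaining moves.

```python
import heapq

# The 8 knight moves: 1 square one way, 2 squares the orthogonal way.
MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

def knight_astar(start, goal):
    """A* from start to goal on an unbounded board; returns number of moves."""
    def h(p):
        return max(abs(p[0] - goal[0]), abs(p[1] - goal[1])) / 2
    frontier = [(h(start), 0, start)]       # (f = g + h, g, position)
    best = {start: 0}
    while frontier:
        f, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g
        for dx, dy in MOVES:
            nxt = (pos[0] + dx, pos[1] + dy)
            if g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None
```

The heuristic is also consistent (each unit-cost move lowers it by at most 1), so the first time the goal is popped its g-value is optimal.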
CS 188: Artificial Intelligence
Spring 2011
Lecture 7: Minimax and Alpha-Beta Search
2/9/2011
Pieter Abbeel, UC Berkeley
Many slides adapted from Dan Klein

Announcements
W1 out and due Monday 4:59pm
P2 out and due next week Friday 4:59pm

Overview
Det...
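The alpha-beta idea named in the lecture title can be sketched over an explicit game tree. The nested-list encoding below (leaves are ints, internal nodes are lists of children) is an assumed toy representation, not the lecture's:

```python
# Alpha-beta minimax: alpha is the best value the maximizer can guarantee so
# far, beta the best for the minimizer; a branch where alpha >= beta cannot
# influence the root and is pruned.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):
        return node                         # leaf utility
    if maximizing:
        v = float("-inf")
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                       # prune remaining children
        return v
    v = float("inf")
    for child in node:
        v = min(v, alphabeta(child, alpha, beta, True))
        beta = min(beta, v)
        if alpha >= beta:
            break
    return v
```

On the textbook-style tree [[3, 12, 8], [2, 4, 6], [14, 5, 2]] (max root over three min nodes) it returns 3 while skipping leaves the pruning rule makes irrelevant.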
Last name: ______  First name: ______  SID: ______  Class account login: ______
Collaborators: ______

CS188 Spring 2011 Written 1: Search and CSPs
Due: Monday 2/14, 5:30pm, either at the beginning of lecture or in the 283 Soda drop box. Zero slip time.
Policy: See course webpage.

1. [7pts] Al...
Inference in Bayesian networks
Chapter 14.4-5

Outline
Exact inference by enumeration
Exact inference by variable elimination
Approximate inference by stochastic simulation
Approximate inference by Markov chain Monte Carlo
A simple knowledge-based agent

function KB-Agent(percept) returns an action
    static: KB, a knowledge base
            t, a counter, initially 0, indicating time
    Tell(KB, Make-Percept-Sentence(percept, t))
    action <- Ask(KB, Make-Action-Query(t))
    Tell(KB, Make-Action-Sentence(action, t))
    t <- t + 1
    return action
Bayesian networks
Chapter 14.1-3

Example
Topology of network encodes conditional independence assertions:
[Figure: nodes Weather, Cavity, Toothache, Catch, with Cavity the parent of Toothache and Catch]
Weather is independent of the other variables. Toothache and Catch are conditionally independent given Cavity.
Inference by enumeration
Slightly intelligent way to sum out variables from the joint without actually constructing its explicit representation

Inference in Bayesian networks
[Figure: the burglary network with nodes B, E, A, J, M]
Rewrite full joint entries using products of CPT entries:
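Enumeration on the burglary network can be sketched as below. The structure (B -> A <- E, A -> J, A -> M) and the CPT numbers are the standard textbook values, assumed here since the figure itself did not survive extraction:

```python
# CPTs for the burglary network (textbook values).
P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A = {(True, True): 0.95, (True, False): 0.94,    # P(a | b, e)
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}                    # P(j | a), P(j | ~a)
P_M = {True: 0.70, False: 0.01}                    # P(m | a), P(m | ~a)

def joint(b, e, a, j, m):
    """Full joint entry as a product of CPT entries."""
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return P_B[b] * P_E[e] * pa * pj * pm

def p_burglary_given_jm():
    """P(B | j, m): sum the joint over hidden E, A, then normalize."""
    num = {b: sum(joint(b, e, a, True, True)
                  for e in (True, False) for a in (True, False))
           for b in (True, False)}
    return num[True] / (num[True] + num[False])
```

With these numbers the query comes out to roughly 0.284, the familiar textbook answer.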
Speech recognition (briefly)

Phones
All human speech is composed from 40-50 phones, determined by the configuration of articulators (lips, teeth, tongue, vocal cords, air flow)
ARPAbet designed for American English: [iy] [ih] [ey] [ao] [o...
Communication and Language
Chapter 22

Outline
Communication
Grammar
Syntactic analysis
Problems

Communication
"Classical" view (pre-1953): language consists of sentences that are true/false (cf. logic)
Modern view (post-1953): language is a form of action

Wittgenstein (1953) Philosophical Investigations; Austin (1962) How to Do Things with Words; Searle (1969) Speech Acts
Why?
Statistical learning
Chapter 20, Sections 1-3

Outline
Bayesian learning
Maximum a posteriori and maximum likelihood learning
Bayes net learning
ML parameter learning with complete data
Linear regression

Example
Suppose there are five kinds of bags of candies:
10% are h1: 100% cherry candies
20% are h2: 75% cherry candies + 25% lime candies
40% are h3: 50% cherry candies + 50% lime candies
20% are h4: 25% cherry candies + 75% lime candies
10% are h5: 100% lime candies
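Bayesian learning on this example is Bayes' rule with i.i.d. candy draws: the posterior over hypotheses is proportional to prior times likelihood. A short sketch, with the all-lime observation sequence chosen purely for illustration:

```python
# Prior over bag hypotheses h1..h5 and per-hypothesis lime probability,
# straight from the example above.
priors = [0.1, 0.2, 0.4, 0.2, 0.1]
p_lime = [0.0, 0.25, 0.5, 0.75, 1.0]

def posterior_after_limes(n):
    """P(h | n lime candies in a row) = alpha * P(h) * P(lime | h)^n."""
    unnorm = [p * (l ** n) for p, l in zip(priors, p_lime)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

After ten limes in a row, h5 (the all-lime bag) dominates the posterior, and h1 is ruled out after the very first lime since its likelihood is zero.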
Temporal probability models

Markov processes (Markov chains)
Construct a Bayes net from these variables: parents?
Markov assumption: X_t depends on bounded subset of X_{0:t-1}
First-order Markov process: P(X_t | X_{0:t-1}) = P(X_t | X_{t-1})
Second-order Markov process: P(X_t | X_{0:t-1}) = P(X_t | X_{t-2}, X_{t-1})
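Under the first-order assumption, the state distribution evolves by summing over the previous state: P(X_t) = sum_x P(X_t | X_{t-1} = x) P(X_{t-1} = x). A sketch with invented rain/sun transition probabilities:

```python
# Toy first-order Markov chain; the transition numbers are illustrative.
T = {"sun": {"sun": 0.9, "rain": 0.1},
     "rain": {"sun": 0.3, "rain": 0.7}}

def step(dist):
    """One step of P(X_t) = sum_x P(X_t | x) * P(X_{t-1} = x)."""
    return {y: sum(dist[x] * T[x][y] for x in dist) for y in T}

dist = {"sun": 1.0, "rain": 0.0}
for _ in range(100):
    dist = step(dist)    # converges to the stationary distribution
```

For these numbers the stationary distribution is (0.75, 0.25): it satisfies 0.75 * 0.9 + 0.25 * 0.3 = 0.75, so repeated stepping leaves it fixed.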
Rational preferences
Idea: preferences of a rational agent must obey constraints.
Rational preferences => behavior describable as maximization of expected utility
Constraints:
Orderability: (A > B) v (B > A) v (A ~ B)
Transitivity: (A > B) ^ (B > C) => (A > C)
Continuity: A > B > C => exists p: [p, A; 1-p, C] ~ B
Speech recognition (briefly)
Chapter 15, Section 6

Outline
Speech as probabilistic inference
Speech sounds
Word pronunciation
Word sequences

Speech as probabilistic inference
It's not easy to wreck a nice beach
Methods for handling uncertainty
Default or nonmonotonic logic:
Assume my car does not have a flat tire
Assume A_25 works unless contradicted by evidence
Issues: What assumptions are reasonable? How to handle contradiction?
Rules with fudge factors: A_25 |->_0.3 ...
Universal instantiation (UI)
Every instantiation of a universally quantified sentence is entailed by it:

    forall v: alpha  |=  Subst({v/g}, alpha)

for any variable v and ground term g.
E.g., forall x: King(x) ^ Greedy(x) => Evil(x) yields
    King(John) ^ Greedy(John) => Evil(John)
    King(Richard) ^ Greedy(Richard) => Evil(Richard)
    ...
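UI is just substitution of a ground term for the universal variable. A toy sketch, where tuples-as-sentences (strings for symbols, tuples for compounds) is an assumed representation rather than a full first-order-logic engine:

```python
# Apply a substitution theta (e.g. {"x": "John"}) to a sentence built from
# nested tuples: Subst({v/g}, alpha).
def subst(theta, sentence):
    if isinstance(sentence, tuple):
        return tuple(subst(theta, part) for part in sentence)
    return theta.get(sentence, sentence)

# forall x: King(x) ^ Greedy(x) => Evil(x), with the quantifier stripped.
rule = ("=>", ("and", ("King", "x"), ("Greedy", "x")), ("Evil", "x"))
```

Calling subst({"x": "John"}, rule) produces the John instantiation above; each ground term g gives one entailed instance.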
CS188 Spring 2017 Section 11: ML
Pacman and Mrs. Pacman have been searching for each other in the Maze. Mrs. Pacman has been pregnant with a
baby, and just this morning she has given birth to Pacbaby (Congratulations, Pacmans!).
Because Pacbaby was born b
CS 188 Spring 2017 Section 12/13: Neural Networks and Deep Learning Handout
Introduction to Artificial Intelligence, Spring 2016

1 Neural Network Representations
You are given a number of functions which are graphed below, from a to h. For the Neur...
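A one-hidden-layer network of the kind such representation questions ask about can be written as a short forward pass. The ReLU hidden layer, sigmoid output, and all weights below are illustrative assumptions, not values from the handout:

```python
import math

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer net: ReLU hidden units, sigmoid output.

    x: input vector; W1: hidden weight rows; b1: hidden biases;
    W2: output weights; b2: output bias.
    """
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)   # ReLU
              for row, b in zip(W1, b1)]
    z = sum(w * h for w, h in zip(W2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))                              # sigmoid
```

Which of the graphed functions such a net can represent comes down to how many ReLU "kinks" the hidden layer provides and the squashing applied at the output.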