Lecture 23-Learning

CS 561: Artificial Intelligence
Instructor: Sofus A. Macskassy, [email protected]
TAs: Nadeesha Ranashinghe ([email protected]), William Yeoh ([email protected]), Harris Chiu ([email protected])
Lectures: MW 5:00-6:20pm, OHE 122 / DEN
Office hours: By appointment
Class page: http://www-rcf.usc.edu/~macskass/CS561-Spring2010/
This class will use http://www.uscden.net/ and the class webpage for:
- Up-to-date information
- Lecture notes
- Relevant dates, links, etc.
Course material: [AIMA] Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig (2nd ed.)

Learning [AIMA Ch. 18]
- Learning agents
- Inductive learning
- Decision tree learning
- Measuring learning performance
Learning
- Learning is essential for unknown environments, i.e., when the designer lacks omniscience
- Learning is useful as a system construction method, i.e., expose the agent to reality rather than trying to write it down
- Learning modifies the agent's decision mechanisms to improve performance

Learning agents
(figure not reproduced in this extract)
Learning elements
Design of the learning element is dictated by:
- what type of performance element is used
- which functional component is to be learned
- how that functional component is represented
- what kind of feedback is available

Example scenarios:
- Supervised learning: correct answers for each instance
- Reinforcement learning: occasional rewards

Performance element | Component         | Representation            | Feedback
Alpha-beta search   | Eval. fn.         | Weighted linear function  | Win/loss
Logical agent       | Transition model  | Successor-state axioms    | Outcome
Utility-based agent | Transition model  | Dynamic Bayes net         | Outcome
Simple reflex agent | Percept-action fn | Neural net                | Correct action
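As a hedged illustration of the first table row (an evaluation function represented as a weighted linear function and learned from win/loss feedback), the following minimal Python sketch nudges feature weights toward the observed game outcome. The feature values, learning rate, and update rule are illustrative assumptions, not material from the lecture.

# Minimal sketch: learning a weighted linear evaluation function from
# win/loss feedback (cf. the "alpha-beta search" row of the table above).
# Feature values and the learning rate are illustrative assumptions.

def evaluate(weights, features):
    # Weighted linear evaluation: Eval(s) = sum_i w_i * f_i(s)
    return sum(w * f for w, f in zip(weights, features))

def update(weights, features, outcome, alpha=0.01):
    # Move each weight so the evaluation shifts toward the observed
    # outcome (+1 win, -1 loss): a simple gradient-style correction.
    error = outcome - evaluate(weights, features)
    return [w + alpha * error * f for w, f in zip(weights, features)]

weights = [0.0, 0.0, 0.0]                                # one weight per board feature
games = [([3.0, 5.0, 1.0], +1), ([1.0, 2.0, 0.0], -1)]   # (features, win/loss outcome)
for features, outcome in games:
    weights = update(weights, features, outcome)
print(weights)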

Inductive learning (a.k.a. Science)
Simplest form: learn a function from examples (tabula rasa).
- f is the target function
- An example is a pair (x, f(x)); e.g., for x a game position (shown on the slide as an O/X board), f(x) = +1
- Problem: find a hypothesis h such that h ≈ f, given a training set of examples

(This is a highly simplified model of real learning:
- Ignores prior knowledge
- Assumes a deterministic, observable "environment"
- Assumes examples are given
- Assumes that the agent wants to learn f (why?))
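To make "find h such that h ≈ f from a training set" concrete, here is a minimal Python sketch that fits polynomial hypotheses of increasing complexity to example pairs (x, f(x)) and reports how well each agrees with the training set. The data points and the restriction to polynomial hypotheses are illustrative assumptions, not part of the lecture.

# Minimal sketch of inductive learning as curve fitting: given example
# pairs (x, f(x)), construct hypotheses h and check agreement with the
# training set. Data points and hypothesis space (polynomials) are
# illustrative assumptions.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # inputs x
ys = np.array([0.1, 0.9, 2.2, 2.8, 4.1])   # observed values f(x)

for degree in (1, 3):
    coeffs = np.polyfit(xs, ys, degree)    # hypothesis h: a degree-d polynomial
    training_error = np.sum((np.polyval(coeffs, xs) - ys) ** 2)
    print(f"degree {degree}: sum-of-squares training error = {training_error:.4f}")

# A simple hypothesis (degree 1) may fit nearly as well as a more complex
# one; preferring the simplest hypothesis consistent with the data is the
# usual bias (Ockham's razor).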