
CPS 170: Introduction
Ron Parr

Contact Information
•  Professor
   –  Ron Parr
   –  D209 LSRC, [email protected], 660-6537
   –  Office hours: Tuesday 1-2, Wednesday 2-3
•  TA
   –  Martin Azizyan
   –  217 Hudson Hall, [email protected], 660-6576
   –  Office hours: Monday 2-3, Wednesday 3-4

About Me
•  My tenth year at Duke
•  Bachelor's degree in philosophy (Princeton)
   –  Philosophy of mind
•  Ph.D. in computer science (Berkeley)
   –  Hierarchical planning under uncertainty
•  Current interests:
   –  Planning under uncertainty
   –  Probabilistic reasoning
   –  Game theory
   –  Reinforcement learning
   –  Robotics
   –  Sensing & vision

Requirements
•  Good programming skills:
   –  C/C++, Java, Matlab, or another high-level language
•  Other expectations
   –  Ability to do short proofs
   –  Basic probability concepts (though we will review all of this)
   –  Basic algorithmic concepts
      •  Complexity - O()
      •  Analysis of algorithms
   –  Math
      •  Basic calculus (partial derivatives)

Major Topics Covered
•  Search
   –  Uninformed search, informed search, CSPs
•  Game playing
   –  Minimax, alpha-beta search, introduction to game theory
•  Logic and knowledge representation
   –  Propositional logic, first-order logic, theorem proving
•  Reasoning under uncertainty
   –  Probability, Bayes nets, HMMs & tracking
•  Planning
   –  Classical planning, decision theory, stochastic planning (MDPs)
•  Introduction to robotics
•  Introduction to machine learning

Major Topics Not Covered
•  Natural language
•  Vision

Class Mechanics
•  Textbook: Artificial Intelligence: A Modern Approach, Russell & Norvig (third edition)
•  Homeworks: 40%
   –  Discussion OK, but the write-up must be your own (see comments on the next slide)
•  Midterm: 30%
   –  Closed book, in class, no collaboration
•  Final: 30%
   –  Closed book, finals week, no collaboration
•  Homeworks will be a mix of short proofs, algorithm design/analysis, and small-scale programming projects

Academic Honesty
•  Brainstorming with friends is encouraged!
•  (But don't confuse brainstorming with letting your smart friends tell you the answers)
•  You must write up solutions on your own
•  Always ask before using code that is not your own
•  Always give credit to the original authors if you incorporate code that is not your own into your solutions

Cool AI Applications
•  Games (Deep Blue, solving checkers, video games)
•  Handwriting recognition (PDAs, tablet PCs, post office)
•  Speech recognition (my car, voice jail, Dragon)
•  MS Windows diagnostics
•  E-commerce (collaborative filtering)
•  Mobile robotics (grand/urban challenges)
•  Space exploration
•  Logistics planning
•  Lots of Google tools
•  Computer security

So, what is this AI stuff?
•  Make machines think like humans
   –  Is this enough?
   –  Is this too much?
•  Make machines act like humans
   –  Is this sufficient?
   –  Is this desirable?

Turing Test
•  The computer must be indistinguishable from a human based upon written exchanges
   –  Does this imply intelligence?
   –  How could the computer cheat?
   –  Does intelligence imply a certain type of computation?
   –  Could an intelligent machine still fail the test?
•  Does our notion of intelligence transcend our concept of humanity?

What Intelligence Isn't
•  It's not about fooling people
•  Fooling people is (in some cases) easy, e.g., ELIZA: http://chayden.net/eliza/Eliza.html
•  More recent efforts: http://chatbots.org/

The Moving Target
•  What is human intelligence?
   –  At one time, calculating ability was prized
      •  Now it is deprecated
      •  Calculators are permitted earlier and earlier in school
   –  Chess was once viewed as an intelligent task
      •  Now, massively parallel computers use not-very-intelligent search procedures to beat grandmasters
      •  Some say Deep Blue wasn't AI
   –  Learning was once thought uniquely human
      •  Now it's a well-developed theory
      •  The best backgammon player is a learning program
•  Is there no litmus test for intelligence, or is this biological chauvinism?

Artificial Flight
•  Even seemingly unambiguous terms such as "flight" were subject to biological chauvinism.
•  Problem: Flight was largely irreducible (no easier subproblems)
•  Demonstrable, unambiguous success ended the chauvinism - could the same be true for AI?

Intelligence: A web of abilities
•  Intelligence is hard to define in isolation
•  A mixture of special-purpose and general-purpose hardware
   –  Special purpose
      •  Recognizing visual patterns
      •  Learning and reproducing language
   –  General purpose
      •  Theorem proving
      •  Learning and excelling at new tasks
•  Seamless integration
•  Solving pieces of the puzzle isn't enough, but it is measurable

Why is it hard: Ideal Intelligence
•  Intelligence means making optimal choices
•  Is anything truly intelligent?
•  How do we define optimality?
•  It took decades for people to realize that this was a thorny issue. Let's see how this played out:

Early Efforts: General (top down)
•  Good news:
   –  Many problems can be formalized as instances of
      •  Search
      •  Logical deduction
   –  The space of all proofs is a (somewhat) searchable space
   –  Knowledge base + theorem proving provide a satisfying picture of reasoning, knowledge, and learning
•  Tell the PC:
   –  All men are mortal
   –  Socrates is a man
•  Ask:
   –  Is Socrates mortal?
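As a concrete illustration of the Tell/Ask knowledge-base idea on the slide above, here is a minimal Python sketch (illustrative only, not course code, and all names are made up). It sidesteps real first-order unification by propositionalizing "all men are mortal" into a single ground rule, and it answers queries by simple forward chaining.

# Toy Tell/Ask knowledge base: facts are strings, rules map premise lists to a conclusion.
class KB:
    def __init__(self):
        self.facts = set()    # known ground facts, e.g. "Man(Socrates)"
        self.rules = []       # (list of premise strings, conclusion string)

    def tell_fact(self, fact):
        self.facts.add(fact)

    def tell_rule(self, premises, conclusion):
        self.rules.append((list(premises), conclusion))

    def ask(self, query):
        # Forward chaining: keep applying rules until no new facts appear.
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if conclusion not in self.facts and all(p in self.facts for p in premises):
                    self.facts.add(conclusion)
                    changed = True
        return query in self.facts

kb = KB()
kb.tell_rule(["Man(Socrates)"], "Mortal(Socrates)")  # "all men are mortal", ground instance only
kb.tell_fact("Man(Socrates)")                        # "Socrates is a man"
print(kb.ask("Mortal(Socrates)"))                    # True: Socrates is mortal

Even this toy version hints at the "bad news" on the next slide: with many rules and facts, the search for a derivation blows up quickly.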
Bad news for general methods
•  Searching in proof space is hard
•  Representing knowledge is hard
   –  What is a chair?
•  Knowledge is interconnected in strange ways
   –  Chairs
   –  People
   –  Gravity
   –  Customs...
•  Early efforts were too general, too ambitious
   –  In most cases, they could not solve the knowledge representation problem
   –  Even when the KR problem was solved, the theorem proving problem was intractable

Early Efforts: Special Purpose (bottom up) Methods
•  Neural networks
   –  Attempted to reproduce the function of human neurons
   –  Highly abstracted from actual "wetware"
•  Proverbial wing-flapping flying machine?
•  Success at reproducing low-level tasks
   –  Pattern recognition, associative memory
•  Nearly became a religion
•  Huge gap between low level and high level
•  Early efforts were too specific

Overpromising and the AI Winter
•  Years of
   –  Naïve optimism
   –  Unrealistic assessments of the challenges
   –  Poor scientific/academic discipline
•  Led to (early 90's)
   –  Backlash
   –  Reduced government funding
   –  Reduced investment from industry
   –  The "AI Winter"

AI Moving Forward
•  More science/engineering
•  Less philosophy
•  Study broad classes of problems that would traditionally require human intelligence (but not intelligence itself)
•  Restrict the problem somewhat:
   –  Develop a crisp input specification
   –  Develop a well-defined success criterion
•  Develop results with
   –  Provable properties
   –  Broad applicability
•  Extract and study the underlying principles behind successful methods

Eye on the prize
•  AI's narrower focus has earned the field credibility and practical successes, yet
•  Some senior researchers complain that we have taken our eye off the prize:
   –  Too much focus on specific problems
   –  Lack of interest in general intelligence
•  Are we ready to tackle general intelligence?

Conclusion
•  We want to solve hard problems that would traditionally require human-level intelligence. (Most we consider are at least NP-hard.)
•  We want to be good computer scientists, so we force ourselves to use well-defined input/output specifications.
•  We aim high, but we let ourselves simplify things if it allows us to produce a general-purpose tool with well-understood properties.