KBAI Ebook: Knowledge-Based Artificial Intelligence

KBAI: CS7637 course at Georgia Tech
Course Creators and Instructors: Ashok Goel, David Joyner. Click here for Course Details
Electronic Book (eBook) Designers: Bhavin Thaker, David Joyner, Ashok Goel.
Last updated: October 6, 2016

Copyright: Ashok Goel and David Joyner, Georgia Institute of Technology. All rights reserved. No part of this document may be reproduced, stored in any retrieval system, or transmitted in any form or by any means without prior written permission.

YouTube Playlists: Part 1 of 5, Part 2 of 5, Part 3 of 5, Part 4 of 5, Part 5 of 5

NOTE: Lessons 02, 04, 14 and 25 have a correspondence problem between YouTube links, transcripts, and slides. Additionally, the following videos have known incomplete transcripts: Lesson 3 - Exercise: Constructing Semantic Nets I; Lesson 5 - Exercise: Block Problem I; Lesson 24 - Example: Goal-Based Autonomy; and Lesson 25 - Raven's Progressive Matrices. These will be fixed when the YouTube playlist is fixed by Udacity and reloaded into this KBAI Ebook.

Lesson 01 - Introduction to Knowledge-Based AI

To understand the intelligence functions at a fundamental level, I believe, would be a scientific achievement on the scale of nuclear physics, relativity, and molecular genetics. – James Albus

Education is not the piling on of learning, information, data, facts, skills, or abilities – that's training or instruction – but is rather making visible what is hidden as a seed. – Thomas More

01 - Introductions
Click here to watch the video
Figure 1: Introductions

Hello, and welcome to CS 7637, Knowledge-Based Artificial Intelligence/Cognitive Systems. My name is Ashok Goel. My name is David Joyner. I'm a Professor of Computer Science and Cognitive Science at Georgia Tech. I've been teaching knowledge-based AI for about 25 years, and I've been doing research in this area for about 30. My personal passion is for computational creativity: building AI agents that are human-like and creative in their own right.

I'm a course developer with Udacity, and I'm also finishing up my own PhD dissertation here at Georgia Tech with Ashok as my advisor. My personal passion is for education, and especially for using modern technology to deliver individualized, personal educational experiences that would be very difficult in very large classrooms. As we'll see, AI is not just the subject of this course; it's also a tool we're using to teach this course.

We had a lot of fun putting this course together, and we hope you enjoy it as well. We also think of this course as an experiment: we want to understand how students learn in online classrooms. So if you have any feedback, please share it with us.

02 - Preview
Click here to watch the video
Figure 2: Preview

So welcome to 7637, Knowledge-Based AI. At the beginning of each lesson, we'll briefly introduce the topic, as shown in the graphic to the right. We'll also talk about how the topic fits into the overall curriculum for the course.
Today, we'll be discussing AI in general, including some of the fundamental conundrums and characteristics of AI. We will describe four schools of AI and discuss how knowledge-based AI fits into the rest of AI. Next, we'll visit the subtitle of the course, Cognitive Systems, and define an architecture for them. Finally, we'll look at the topics that we'll cover in this course in detail.

03 - Conundrums in AI
Click here to watch the video
Figure 3: Conundrums in AI

Let's start our discussion today with some of the biggest problems in AI. We obviously are not going to solve all of them today, but it's good to start with the big picture. AI has several conundrums; I'm going to describe five of the main ones today.

Conundrum number one: all intelligent agents have only limited computational resources, processing speed, memory size, and so on. But most interesting AI problems are computationally intractable. How then can we get AI agents to give us near real-time performance on many interesting problems?

Conundrum number two: all computation is local, but most AI problems have global constraints. How then can we get AI agents to address global problems using only local computation?

Conundrum number three: computational logic is fundamentally deductive, but many AI problems are abductive or inductive in their nature. How can we get AI agents to address abductive or inductive problems? If you do not understand some of these terms, like abduction, don't worry about it; we'll discuss it later in the class.

Conundrum number four: the world is dynamic and knowledge is limited, but an AI agent must always begin with what it already knows. How then can an AI agent ever address a new problem?

Conundrum number five: problem solving, reasoning, and learning are complex enough, but explanation and justification add to the complexity. How then can we get an AI agent to ever explain or justify its decisions?

04 - Characteristics of AI Problems
Click here to watch the video
Figure 4: Characteristics of AI Problems

I hope our discussion of the big problems in AI didn't scare you off. Let's bring the discussion down closer to earth and talk about a few fundamental characteristics of AI problems. Number one: in many AI problems, data arrives incrementally; not all the data comes right at the beginning. Number two: problems often have a recurring pattern; the same kinds of problems occur again and again. Number three: problems occur at many different levels of abstraction. Number four: many interesting AI problems are computationally intractable. Number five: the world is dynamic, it's constantly changing, but knowledge of the world is relatively static. Number six: the world is open-ended, but knowledge of the world is relatively limited. So the question then becomes: how can we design AI agents that can address AI problems with these characteristics? Those are the challenges we'll discuss in this course.

05 - Characteristics of AI Agents
Click here to watch the video
Figure 5: Characteristics of AI Agents

In addition to AI problems having several characteristics, AI agents too have several properties. Property number one: AI agents have only limited computing power, processing speed, memory size, and so on. Property number two:
AI agents have limited sensors; they cannot perceive everything in the world. Property number three: AI agents have limited attention; they cannot focus on everything at the same time. Property number four: computational logic is fundamentally deductive. Property number five: the world is large, but AI agents' knowledge of the world is incomplete relative to the world. So the question then becomes: how can AI agents with such bounded rationality address open-ended problems in the world?

06 - Exercise What are AI Problems
Click here to watch the video
Figure 6: Exercise What are AI Problems

Now that we have talked about the characteristics of AI agents and AI problems, let us talk a little about what kind of problems you might build an AI agent for. On the right are several tasks. Which of these are AI problems? Or, to put it differently, for which of these problems would you build an AI agent?

07 - Exercise What are AI Problems
Click here to watch the video

David, which of these do you think are AI problems? So I'd say that all of these are AI problems. All of these are things that we humans do on a fairly regular basis. And if the goal of artificial intelligence is to recreate human intelligence, then it seems like we need to be able to design agents that can do any of these things. I agree. In fact, during this class we'll design AI agents that can address each of these problems. For now, let us just focus on the first one: how to design an AI agent that can answer Jeopardy questions.

08 - Exercise AI in Practice Watson
Click here to watch the video

Let's start by looking at an example of an AI agent in action. Many of you are familiar with Watson, the IBM program that plays Jeopardy. Some of you may not be, and that's fine; we'll show an example in a minute. When you watch Watson in action, try to think: What are some of the things Watson must know about? What are some of the things that Watson must be able to reason about in order to play Jeopardy? Write them down.

And anytime you feel the pain, hey, this guy, refrain. Don't carry the world upon your shoulders. Watson? Who is Jude? Yes. Olympic Oddities for 200. Milorad Cavic almost upset this man's perfect 2008 Olympics, losing to him by one-hundredth of a second. Watson. Who is Michael Phelps? Yes, go. Name the Decade for 200. Disneyland opens and the peace symbol is created. Ken. What are the 50s? Yes. Final Frontiers for 1,000, Alex. Tickets aren't needed for this event, a black hole's boundary from which matter cannot escape. Watson. What is event horizon?

09 - Exercise AI in Practice Watson
Click here to watch the video

David, what did you write down? So I said that the four fundamental things Watson must be able to do to play Jeopardy are: first, read the clue; then search through its knowledge base; then actually decide on its answer; and then phrase its answer in the form of a question. That's right. And during this course, we'll discuss each part of David's answer.
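To make David's four steps concrete, here is a minimal sketch of that read-search-decide-phrase pipeline in Python. The function names, the toy knowledge base, and the keyword-overlap scoring are illustrative assumptions for this sketch only; they are not how IBM Watson actually works.

# A toy sketch of the four steps David lists: read the clue, search a
# knowledge base, decide on an answer, and phrase it as a question.
# The knowledge base and the overlap scoring are illustrative assumptions,
# not IBM Watson's actual method.

TOY_KNOWLEDGE_BASE = {
    "Michael Phelps": "swimmer with a perfect 2008 Olympics who beat Milorad Cavic by one-hundredth of a second",
    "event horizon": "a black hole's boundary from which matter cannot escape",
    "the 1950s": "decade when Disneyland opens and the peace symbol is created",
}

def read_clue(clue: str) -> set[str]:
    """Step 1: read the clue and reduce it to a bag of lowercase words."""
    return set(clue.lower().replace(",", "").replace(".", "").split())

def search_knowledge_base(clue_words: set[str]) -> dict[str, int]:
    """Step 2: score each candidate by word overlap with its stored description."""
    scores = {}
    for candidate, description in TOY_KNOWLEDGE_BASE.items():
        scores[candidate] = len(clue_words & set(description.lower().split()))
    return scores

def decide_answer(scores: dict[str, int]) -> str:
    """Step 3: decide on the best-scoring candidate."""
    return max(scores, key=scores.get)

def phrase_as_question(answer: str) -> str:
    """Step 4: phrase the answer in the form of a question, as Jeopardy requires."""
    return f"What is {answer}?"

clue = "Tickets aren't needed for this event, a black hole's boundary from which matter cannot escape."
print(phrase_as_question(decide_answer(search_knowledge_base(read_clue(clue)))))
# -> What is event horizon?

Even this toy version makes the point of the next section: the pipeline needs knowledge (the stored descriptions), reasoning (the scoring and decision), and a memory to hold that knowledge.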
10 - What is Knowledge-Based AI
Click here to watch the video
Figure 7: What is Knowledge-Based AI

Let us look at the processes that Watson may be using a little more closely. Clearly Watson is doing a large number of things. It is trying to understand natural language sentences. It is trying to generate some natural language sentences. It is making some decisions. I'll group all of these things broadly under reasoning. Reasoning is a fundamental process of knowledge-based AI. A second fundamental process of knowledge-based AI is learning. Watson presumably is learning also: it perhaps gets a right answer to some questions and stores that answer somewhere. If it gets a wrong answer, then once it learns the right answer, it stores the right answer somewhere as well. Learning too is a fundamental process of knowledge-based AI. A third fundamental process of knowledge-based AI is memory. If you're going to learn something, the knowledge that you're learning has to be stored somewhere: in memory. If you're going to reason using knowledge, then that knowledge has to be accessed from somewhere: from memory. The memory process will store what we learn, as well as provide access to the knowledge needed for reasoning.

These three processes of learning, memory, and reasoning are intimately connected. We learn so that we can reason, and the results of reasoning often result in additional learning. Once we learn, we can store it in memory. However, we need knowledge to learn: the more we know, the more we can learn. Reasoning requires knowledge that memory provides access to, and the results of reasoning can also go into memory. So here are three processes that are closely related. A key aspect of this course on knowledge-based AI is that we will be talking about theories of knowledge-based AI that unify reasoning, learning, and memory, instead of discussing any one of the three separately, as sometimes happens in some schools of AI. We're going to try to unify the three concepts. These three processes put together I will call deliberation. This deliberation process is one part of the overall architecture of a knowledge-based AI agent. This figure illustrates the overall architecture of an AI agent: here we have input in the form of perceptions of the world and output in the form of actions in the world. The agent may have a large number of processes that map these perceptions to actions. We are going to focus right now on deliberation, but the agent architecture also includes metacognition and reaction, which we'll discuss later.
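As a rough illustration of this architecture, here is a minimal Python sketch of a deliberative agent in which reasoning, learning, and memory feed one another: perceptions come in, deliberation maps them to actions, what is learned goes into memory, and memory supplies the knowledge that reasoning needs. The class and method names are assumptions made for this sketch, not part of the course's own code, and metacognition and reaction are left out since the lecture defers them to later lessons.

# A minimal sketch of the deliberation part of a knowledge-based agent:
# memory stores knowledge, learning adds to it, and reasoning uses it to
# map a perception to an action. Names here are illustrative assumptions.

class DeliberativeAgent:
    def __init__(self):
        self.memory = {}  # memory: stores knowledge for later reasoning

    def learn(self, situation, action):
        """Learning: store an association between a situation and an action."""
        self.memory[situation] = action

    def reason(self, perception):
        """Reasoning: use knowledge from memory to select an action."""
        return self.memory.get(perception, "ask-for-help")

    def act(self, perception):
        """Deliberation: perception in, action out, with learning as a side effect."""
        action = self.reason(perception)
        self.learn(perception, action)  # results of reasoning also go into memory
        return action

agent = DeliberativeAgent()
agent.learn("light is red", "stop")
print(agent.act("light is red"))    # -> stop
print(agent.act("light is green"))  # -> ask-for-help (no knowledge yet)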
11 - Foundations The Four Schools of AI
Click here to watch the video
Figure 8: Foundations The Four Schools of AI
Figure 9: Foundations The Four Schools of AI

Another way of understanding what knowledge-based AI is, is to contrast it with the other schools of thought in AI. We can think in terms of a spectrum. On one end of the spectrum is acting; the other end of the spectrum is thinking. As an example, when you're driving a car, you're acting on the world, but when you are planning what route to take, you're thinking about the world. There is a second dimension for distinguishing between different schools of thought in AI. At one end of the spectrum we can think of AI agents that are optimal; at the other end of the spectrum, we can think of AI agents that act and think like humans. Humans are multifunctional; they have very robust intelligence. That intelligence need not be optimal relative to any one task, but it's very general purpose: it works for a very large number of tasks. Whereas we can have agents on the other side which are optimal for a given task. Given these two axes we get four quadrants. Starting from the top left and going counterclockwise, these are: agents that think optimally, agents that act optimally, agents that act like humans, and agents that think like humans. In this particular course on knowledge-based AI, we're interested in agents that think like humans.

Let us take a few examples to make sure that we understand this four-quadrant world. Here are some well-known computational techniques. Consider many machine learning algorithms. These algorithms analyze large amounts of data and determine patterns or regularities in that data. I might think of them as being in the top-left quadrant: they are really doing thinking, and they often are optimal, but they're not necessarily human-like. Airplane autopilots would go under acting optimally: they're certainly acting in the world, and you want them to act optimally. Improvisational robots that can perhaps dance to the music that you play are acting, and they are behaving like humans, dancing to some music. The Semantic Web, a new generation of web technologies in which the web understands the various pages and information on it, I might put under thinking like humans: it is thinking, not acting in the world, and it is much more like humans than, let's say, some of the other computational techniques here. If you're interested in reading more about these projects, you can check out the course materials, where we've provided some recent papers on these different computational techniques. There's a lot of cutting-edge research going on here at Georgia Tech and elsewhere on these different technologies, and if you really are interested in this, this is something where we're always looking for contributors.
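One way to internalize the two axes is to treat them as a pair of yes/no features, as in the sketch below. The classify function and the example placements simply restate the quadrant assignments given above and anticipate the exercises that follow; they are an illustration, not part of the course materials.

# The two axes from the lecture as a tiny classifier: does the system
# primarily act or think, and is it optimal or human-like? The example
# systems repeat the placements discussed in this lesson.

def classify(acts: bool, human_like: bool) -> str:
    """Map the two axes onto one of the four quadrants."""
    doing = "act" if acts else "think"
    style = "like humans" if human_like else "optimally"
    return f"agents that {doing} {style}"

examples = {
    "machine learning algorithms": classify(acts=False, human_like=False),
    "airplane autopilots":         classify(acts=True,  human_like=False),
    "improvisational robots":      classify(acts=True,  human_like=True),
    "Semantic Web":                classify(acts=False, human_like=True),
}

for system, quadrant in examples.items():
    print(f"{system}: {quadrant}")
# Knowledge-based AI, the subject of this course, lives in the
# "agents that think like humans" quadrant.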
12 - Exercise What is KBAI
Click here to watch the video
Figure 10: Exercise What is KBAI

So one thing that many students in this class are probably familiar with is Sebastian Thrun's Robotics class on autonomous vehicles. David, where do you think an autonomous vehicle would fall on this spectrum? So it seems to me like an autonomous vehicle definitely moves around in the world, so it certainly acts in the world. And driving is a very human-like behavior, so I'd say that it acts like a human. What do you think? Do you agree with David?

13 - Exercise What is KBAI
Click here to watch the video

David, do we really care whether or not the autonomous vehicle thinks and acts the way we do? I guess, now that you mention it, as long as the vehicle gets me to my destination and doesn't run over anything on the way, I really don't care if it thinks the way I do. And if you look at the way I drive, I really hope it doesn't act the way I do. So the autonomous vehicle may really belong to the acting rationally side of the spectrum. At the same time, looking at the way humans drive might help us design a robot, and looking at the robot's design might help us reflect on human cognition. This is one of the patterns of knowledge-based AI.

14 - Exercise The Four Schools of AI
Click here to watch the video
Figure 11: Exercise The Four Schools of AI

Let us do an exercise together. Once again, we have the four quadrants shown here, and at the top left are four computational artifacts. I'm sure you're familiar with all four of them; C-3PO is a fictitious artifact from Star Wars. Can we put these four artifacts in the quadrants to which they best belong?

15 - Exercise The Four Schools of AI
Click here to watch the video

What do you think about this, David? So starting with Roomba, I would put Roomba in the bottom left. It definitely acts in the world, but it definitely doesn't act like I do: it crisscrosses across the floor until it vacuums everything up. So we're going to say that's acting optimally. C-3PO is fluent in over six million forms of communication, which means that it interacts with humans and other species very often. In order to do that, it has to understand natural sentences and put its own knowledge back into natural sentences. So it has to act like humans. Apple's virtual assistant, Siri, doesn't act in the world, so she is more on the thinking end of the spectrum; but, like C-3PO, she has to interact with humans. She has to read human sentences and she has to put her own responses back into normal vernacular. So we're going to say that she thinks like humans. Google Maps plots your route from your origin to your destination, so it's definitely doing thinking; it's not doing any acting in the world. But we don't really care whether it does the route planning like we would do it, so we would say it does its route planning optimally: it takes into consideration traffic, current construction, different things like that, whereas we would probably think of the routes we have taken in the past. So Google Maps thinks optimally. That is a good answer, David. I agree with you, but note here that some aspects of Siri may well belong in some of the other quadrants. So putting Siri under thinking like humans sounds plausible, but Siri might also be viewed as acting when it gives you a response, and some aspects of Siri might also be optimal, not necessarily like humans. So if you'd like to discuss where these technologies belong on these spectrums, or perhaps discuss where some other AI technologies that you're familiar with belong on these spectrums, feel free to head on over to our forums, where you can bring up your own technologies and discuss the different ways in which they fit into the broader schools of AI.

16 - What are Cognitive Systems
Click here to watch the video

I'm sure you have noticed that this class has a subtitle: Cognitive Systems. Let's talk about this term and break it down into its components. Cognitive, in this context, means dealing with human-like intelligence; the ultimate goal is to develop human-level, human-like intelligence. Systems, in this context, means having multiple interacting components, such as learning, reasoning, and memory. Cognitive systems are systems that exhibit human-level, human-like intelligence through ...