Learning Novel Concepts in the Kinship Domain

Daniel M. Roy
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology

Abstract

This paper addresses the role that novel concepts play in learning good theories. To concretize the discussion, I use Hinton's kinship dataset as motivation throughout the paper. The standpoint taken in this paper is that the most compact theory that describes a set of examples is the preferred theory: an explicit Occam's Razor. The kinship dataset is a good test-bed for thinking about relational concept learning because it contains interesting patterns that will undoubtedly be part of a compact theory describing the examples. To begin with, I describe a very simple computational-level theory for inductive theory learning in first-order logic that precisely states that the most compact theory is preferred. In addition, I illustrate the obvious result that predicate invention is a necessary part of any system striving for compact theories. I present derivations within the Inductive Logic Programming (ILP) framework that show how the intuitive theories of family trees can be learned. These results suggest that encoding regular equivalence directly into the training sets of ILP systems can improve learning performance. To investigate theories resulting from optimization, I devise an algorithm that works with a very strict language bias, allowing all consistent rules to be entertained and explicitly optimized over for small datasets. The algorithm, which can be viewed as a special-case implementation of ILP, is capable of learning a theory of kinship comparable in compactness to the intuitive theories humans use regularly. However, this alternative approach falls short, as it is incapable of inventing the unary predicate sex to learn a more compact theory. Finally, I comment on the philosophical position of extreme nativism in light of the ability of these systems to invent primitive concepts not present in the training data.

Introduction

The core of the intuitive theory of kinship in western culture is the family tree, from which any number of queries about kinship relationships can be answered. Could a machine, presented with the kinship relationships between individuals in a family, learn the intuitive family tree representation?

This paper focuses heavily on a dataset introduced in Hinton (1986). In this dataset, a group of individuals are related by ...

... general because of the semi-decidability of first order logic, there has been great success at the algorithmic level in the field of Inductive Logic Programming (ILP). The problem ILP addresses is: learn a first-order logic theory that, together with provided background knowledge, logically entails a set of examples (Nienhuys-Cheng and de Wolf, 1997).

Using the ILP framework, it is possible to show how inverse resolution can devise all three of the basis set predicates that comprise the family tree representation. The most interesting ...
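To fix the flavor of the problem statement, the following is a minimal Python sketch of a compactness-driven search over kinship rules under a deliberately strict language bias: the only theories entertained are single chain rules of the form head(X,Y) :- p(X,Z), q(Z,Y), a candidate is kept only if, together with the background facts, it entails every positive example, and the most compact consistent candidate is preferred. The predicate names, the toy family facts, the grandparent target, and the literal-counting size measure are all illustrative assumptions, not the dataset or the algorithm from the paper.

    from itertools import product

    # Toy background knowledge: a tiny, made-up family tree given as ground
    # facts over binary predicates (illustrative; not Hinton's dataset).
    facts = {
        ("parent", "christopher", "arthur"),
        ("parent", "christopher", "victoria"),
        ("parent", "arthur", "colin"),
        ("parent", "victoria", "charlotte"),
        ("married", "christopher", "penelope"),
    }

    # Positive examples that the learned theory, together with the
    # background facts, must logically entail.
    positives = {
        ("grandparent", "christopher", "colin"),
        ("grandparent", "christopher", "charlotte"),
    }

    people = {name for (_, a, b) in facts for name in (a, b)}
    predicates = sorted({rel for (rel, _, _) in facts})


    def chain_rule_consequences(p, q):
        """Ground atoms entailed by grandparent(X,Y) :- p(X,Z), q(Z,Y)."""
        derived = set()
        for x, z, y in product(people, repeat=3):
            if (p, x, z) in facts and (q, z, y) in facts:
                derived.add(("grandparent", x, y))
        return derived


    # Very strict language bias: only length-two chain rules over the
    # background predicates are entertained.
    candidates = [(p, q) for p in predicates for q in predicates]

    # Keep candidates that entail every positive example, then apply an
    # explicit Occam's Razor: prefer the most compact consistent theory,
    # measured here (as an assumption) by the number of literals.
    consistent = [rule for rule in candidates
                  if positives <= chain_rule_consequences(*rule)]
    best = min(consistent, key=lambda rule: 1 + len(rule))

    print(f"grandparent(X,Y) :- {best[0]}(X,Z), {best[1]}(Z,Y).")
    # Prints: grandparent(X,Y) :- parent(X,Z), parent(Z,Y).

In the setting the paper actually considers, the hypothesis space is of course far richer than two-literal chain rules, and entailment is judged against a whole theory rather than one rule at a time; the sketch only illustrates the general pattern of enumerating consistent rules and then optimizing for compactness.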