
CS 440: Introduction to AI
Homework 5 Solution
Due: November 30, 2010, 11:59 PM

Your answers must be concise and clear. Explain sufficiently that we can easily determine what you understand. We will give more points for a brief, interesting discussion with no answer than for a bluffing answer. Please email your solution to the TA at cs440ta@cs.illinois.edu.

1 Naive Bayes

MPG       Horsepower   Weight   Acceleration   Displacement
High      LT100        Light    Wow            Small
Low       GT100        Heavy    Poor           Large
Moderate  LT100        Light    Wimpy          Medium
Moderate  LT100        Light    Wimpy          Medium
High      LT100        Light    Wimpy          Small
Low       GT100        Heavy    Wimpy          Medium
Moderate  LT100        Heavy    Wimpy          Medium
Low       GT100        Heavy    Wimpy          Large
Moderate  LT100        Heavy    Poor           Medium
Moderate  GT100        Heavy    Wow            Large
Low       GT100        Heavy    Poor           Large
Moderate  GT100        Light    Wimpy          Medium
Low       GT100        Heavy    Wimpy          Large
Low       GT100        Heavy    Wow            Medium
High      LT100        Light    Wimpy          Small
Moderate  GT100        Heavy    Poor           Medium

Table 1: A data set. LT100: less than 100 HP. GT100: greater than 100 HP.

Table 1 shows an automobile dataset. The data consist of five features (MPG, Horsepower, Weight, Acceleration, Displacement), each taking two or three values. Assume for your naive Bayes net that MPG, Horsepower, Weight, and Acceleration are conditionally independent of each other given Displacement. We will learn a naive Bayes net from the dataset in Table 1 and use it to estimate missing values of new examples.
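In other words, the assumed structure factorizes the joint distribution of the four attributes given the class variable (Displacement):

\[
P(\text{MPG}, \text{Horsepower}, \text{Weight}, \text{Acceleration} \mid \text{Displacement}) = \prod_{i} P(X_i \mid \text{Displacement})
\]

so each conditional \( P(X_i \mid \text{Displacement}) \) can be estimated separately from the table.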
In a naive Bayes net, the attributes (X) are conditionally independent of each other given a class variable (Y). Y can be inferred by the following naive Bayes classification rule:

\[
Y \leftarrow \arg\max_{y_k} P(Y = y_k) \prod_i P(X_i \mid Y = y_k) \tag{1}
\]

To estimate the parameters, we use Maximum Likelihood Estimation (MLE) with add-one (Laplace) smoothing:

\[
\hat{P}(Y = y_k) = \frac{\#_D\{Y = y_k\} + 1}{|D| + J} \tag{2}
\]

\[
\hat{P}(X_i = x_{ij} \mid Y = y_k) = \frac{\#_D\{X_i = x_{ij} \wedge Y = y_k\} + 1}{\#_D\{Y = y_k\} + K} \tag{3}
\]

where \( \#_D\{x\} \) is the number of examples in the data set D that satisfy property x, J is the number of distinct values Y can take on, and K is the number of distinct values \( X_i \) can take on. In addition to our text, you may find it useful to visit http://www.cs.cmu.edu/~tom/mlbook/NBayesLogReg.pdf.

1. From all the examples in Table 1, estimate the parameters.
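Applying equations (2) and (3) is mechanical, so a small program can serve as a sanity check on hand computation. Below is a minimal Python sketch (our own illustration, not part of the graded solution; the helper name estimate_parameters is hypothetical) that transcribes Table 1 and computes the add-one-smoothed estimates, treating Displacement as the class variable Y (so J = 3) and the other four features as the attributes X_i.

```python
from collections import Counter

# Table 1, transcribed as (MPG, Horsepower, Weight, Acceleration, Displacement).
# Displacement is the class variable Y; the other four are the attributes X_i.
DATA = [
    ("High", "LT100", "Light", "Wow", "Small"),
    ("Low", "GT100", "Heavy", "Poor", "Large"),
    ("Moderate", "LT100", "Light", "Wimpy", "Medium"),
    ("Moderate", "LT100", "Light", "Wimpy", "Medium"),
    ("High", "LT100", "Light", "Wimpy", "Small"),
    ("Low", "GT100", "Heavy", "Wimpy", "Medium"),
    ("Moderate", "LT100", "Heavy", "Wimpy", "Medium"),
    ("Low", "GT100", "Heavy", "Wimpy", "Large"),
    ("Moderate", "LT100", "Heavy", "Poor", "Medium"),
    ("Moderate", "GT100", "Heavy", "Wow", "Large"),
    ("Low", "GT100", "Heavy", "Poor", "Large"),
    ("Moderate", "GT100", "Light", "Wimpy", "Medium"),
    ("Low", "GT100", "Heavy", "Wimpy", "Large"),
    ("Low", "GT100", "Heavy", "Wow", "Medium"),
    ("High", "LT100", "Light", "Wimpy", "Small"),
    ("Moderate", "GT100", "Heavy", "Poor", "Medium"),
]

ATTRS = ["MPG", "Horsepower", "Weight", "Acceleration"]

def estimate_parameters(data):
    """Add-one-smoothed estimates per equations (2) and (3)."""
    ys = [row[-1] for row in data]
    y_values = sorted(set(ys))
    J = len(y_values)          # distinct class values: Small, Medium, Large
    y_counts = Counter(ys)

    # Equation (2): P(Y = y_k) = (#{Y = y_k} + 1) / (|D| + J)
    p_y = {y: (y_counts[y] + 1) / (len(data) + J) for y in y_values}

    # Equation (3): P(X_i = x_ij | Y = y_k)
    #             = (#{X_i = x_ij and Y = y_k} + 1) / (#{Y = y_k} + K)
    p_x_given_y = {}
    for i, attr in enumerate(ATTRS):
        x_values = sorted({row[i] for row in data})
        K = len(x_values)      # distinct values of this attribute
        for y in y_values:
            for x in x_values:
                joint = sum(1 for row in data if row[i] == x and row[-1] == y)
                p_x_given_y[(attr, x, y)] = (joint + 1) / (y_counts[y] + K)
    return p_y, p_x_given_y

p_y, p_x_given_y = estimate_parameters(DATA)
print(p_y["Small"])                           # (3 + 1) / (16 + 3) = 4/19 ~ 0.2105
print(p_x_given_y[("MPG", "High", "Small")])  # (3 + 1) / (3 + 3)  = 2/3  ~ 0.6667
```

To check one entry by hand: Small appears 3 times among the 16 examples and Y takes J = 3 values, so equation (2) gives \( \hat{P}(\text{Displacement} = \text{Small}) = (3 + 1)/(16 + 3) = 4/19 \approx 0.211 \). Likewise, MPG takes K = 3 values and MPG = High co-occurs with Small in all 3 of those examples, so equation (3) gives \( \hat{P}(\text{MPG} = \text{High} \mid \text{Small}) = (3 + 1)/(3 + 3) = 2/3 \).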