CSCI 5512: Artificial Intelligence II
Spring 2011
Homework #3, Due May 2nd

1. (20 points) This question considers the value of perfect information (VPI), VPI(Ej), which measures the value of acquiring additional information Ej given existing information E.

(a) (8 points) Show that VPI is nonnegative, i.e., VPI(Ej) >= 0 for all Ej and all E.

(b) (12 points) Show that VPI is order independent: given two pieces of information, the value of acquiring both is independent of the order in which they arrive.

2. (20 points) The following payoff matrix shows a game between politicians and the Federal Reserve. Politicians can expand or contract fiscal policy, while the Fed can expand or contract monetary policy. Each side also has preferences for who should do what. The payoffs shown are simply the rank orderings: 9 for first choice through 1 for last choice.

(i) (12 points) Find the Nash equilibrium for the game in pure strategies.

(ii) (4 points) Is there a dominant strategy equilibrium in the game? If yes, find the dominant strategy equilibrium; if no, explain why the Nash equilibrium is not also a dominant strategy equilibrium.

(iii) (4 points) We say that an outcome is Pareto optimal if there is no other outcome that all players would prefer. Is the pure-strategy Nash equilibrium Pareto optimal?

3. (40 points) (Programming Assignment) Consider the Restaurant dataset given in the table below:

(a) (15 points) Implement a decision tree learning algorithm dtree4, and learn a decision tree (of depth at most 4) from the given training data. What is the training set error of the learned decision tree?

(b) (15 points) Implement a 2-decision list learning algorithm dlist2, and learn a decision list from the given training data. What is the training set error of the learned decision list?

(c) (10 points) Show that the hypothesis space of k-decision lists includes all decision trees of depth k.
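As a starting point for 3(a), here is a minimal sketch of depth-limited decision tree learning by greedy information gain (the standard approach). The attribute names and toy rows below are hypothetical stand-ins, since the Restaurant table is not reproduced in this preview; the real dtree4 would run on that table with the depth bound set to 4.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a multiset of labels
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def dtree(rows, labels, attrs, depth):
    # Return a majority-label leaf at the depth limit, on pure labels,
    # or when no attributes remain; otherwise split greedily.
    majority = Counter(labels).most_common(1)[0][0]
    if depth == 0 or not attrs or len(set(labels)) == 1:
        return majority
    def gain(a):  # information gain of splitting on attribute a
        rem = 0.0
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            rem += len(sub) / len(labels) * entropy(sub)
        return entropy(labels) - rem
    best = max(attrs, key=gain)
    tree = {"attr": best, "default": majority, "branches": {}}
    for v in set(r[best] for r in rows):
        sub_rows = [r for r in rows if r[best] == v]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == v]
        rest = [a for a in attrs if a != best]
        tree["branches"][v] = dtree(sub_rows, sub_labels, rest, depth - 1)
    return tree

def predict(tree, row):
    # Walk internal nodes until reaching a leaf label; unseen values
    # fall back to the majority label stored at that node.
    while isinstance(tree, dict):
        tree = tree["branches"].get(row[tree["attr"]], tree["default"])
    return tree

# Hypothetical toy data (NOT the assignment's Restaurant table):
rows = [
    {"Patrons": "Some", "Hungry": "Yes"},
    {"Patrons": "Some", "Hungry": "No"},
    {"Patrons": "Full", "Hungry": "Yes"},
    {"Patrons": "Full", "Hungry": "No"},
    {"Patrons": "None", "Hungry": "No"},
]
labels = ["Yes", "Yes", "Yes", "No", "No"]
tree = dtree(rows, labels, ["Patrons", "Hungry"], 4)
train_err = sum(predict(tree, r) != l for r, l in zip(rows, labels)) / len(rows)
```

The training set error asked for in 3(a) is exactly this `train_err`, computed on the assignment's own table instead of the toy rows.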
4. (40 points) (Programming Assignment) We will use the Pendigits dataset from the UCI Machine Learning Repository. The dataset contains 10992 examples, each with 16 integer features in the range 0-100 and a class label in {0, 1, 2, ..., 9}. The problem is a 10-class classification problem where each class corresponds to a handwritten digit.

The training set contains 7494 data points and can be downloaded in plain text from:
http://archive.ics.uci.edu/ml/machine-learning-databases/pendigits/pendigits.tra

The test set contains 3498 data points and can be downloaded in plain text from:
http://archive.ics.uci.edu/ml/machine-learning-databases/pendigits/pendigits.tes

Each data point in the training and test set files is a line of 17 comma-separated values: the first 16 form the input feature vector and the last one is the class label. For the training set, the class label is used for training, whereas for the test set, the class label is used only for evaluation. You can find more information about the dataset at:
http://archive.ics.uci.edu/ml/datasets/Pen-Based+Recognition+of+Handwritten+Digits

Train a feedforward neural network nnet with one hidden layer using the backpropagation algorithm. Choose the number of hidden nodes appropriately. Set the number of output nodes to 10, one for each class. Using 10 different initial choices for the starting weights, report:

(i) the mean accuracy on the test set along with its standard deviation,
(ii) the accuracy on the test set using the initial choice of weights that performed best on the training set, and
(iii) the best accuracy on the test set among all 10 initial choices of weights.
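A minimal sketch of the required setup: a one-hidden-layer network trained by full-batch backpropagation, restarted from several random initializations. The tanh hidden layer, softmax output, hidden-layer size, learning rate, and the synthetic 3-class blobs standing in for Pendigits are all illustrative assumptions, not part of the assignment; for Pendigits the input dimension is 16 and the number of output nodes is 10.

```python
import numpy as np

def forward(X, W1, b1, W2, b2):
    # Hidden layer (tanh) followed by a numerically stabilized softmax
    H = np.tanh(X @ W1 + b1)
    Z = H @ W2 + b2
    Z -= Z.max(axis=1, keepdims=True)
    P = np.exp(Z)
    P /= P.sum(axis=1, keepdims=True)
    return H, P

def train(X, y, n_hid=10, epochs=300, lr=0.5, seed=0):
    # Full-batch gradient descent on softmax cross-entropy loss
    rng = np.random.default_rng(seed)       # one seed = one initial weight choice
    n, d = X.shape
    k = int(y.max()) + 1                    # 10 output nodes for Pendigits
    W1 = rng.normal(0.0, 0.1, (d, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(0.0, 0.1, (n_hid, k)); b2 = np.zeros(k)
    Y = np.eye(k)[y]                        # one-hot targets
    for _ in range(epochs):
        H, P = forward(X, W1, b1, W2, b2)
        dZ = (P - Y) / n                    # gradient at the softmax output
        dW2 = H.T @ dZ; db2 = dZ.sum(axis=0)
        dH = dZ @ W2.T * (1.0 - H**2)       # backprop through tanh
        dW1 = X.T @ dH; db1 = dH.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

def accuracy(X, y, params):
    _, P = forward(X, *params)
    return float((P.argmax(axis=1) == y).mean())

# Toy 3-class blobs as a stand-in for the Pendigits feature vectors
rng = np.random.default_rng(1)
means = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([m + 0.4 * rng.normal(size=(30, 2)) for m in means])
y = np.repeat(np.arange(3), 30)

# 10 random initializations, as the assignment requires
accs = [accuracy(X, y, train(X, y, seed=s)) for s in range(10)]
mean_acc, std_acc, best_acc = np.mean(accs), np.std(accs), max(accs)
```

For the actual assignment, `accs` would be test-set accuracies of networks trained on pendigits.tra and evaluated on pendigits.tes; `mean_acc`/`std_acc` answer (i) and `best_acc` answers (iii), while (ii) requires additionally tracking each restart's training-set accuracy to pick the best initialization.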