CS195f Homework 3
Mark Johnson and Erik Sudderth
Homework due at 2pm, 5th November 2009

This problem set asks you to investigate exponential or Maximum Entropy (MaxEnt) classifiers. These involve probability distributions of the form:

P(y | x) = \frac{1}{Z_x(w)} \exp(w \cdot f(y, x)), \quad \text{where } Z_x(w) = \sum_{y' \in Y} \exp(w \cdot f(y', x))

where:
- y \in Y is the class label we want to predict,
- x \in X are the conditioning or predictive variables,
- f(y, x) \in \mathbb{R}^m is an m-dimensional feature vector for the pair (y, x), and
- w \in \mathbb{R}^m is an m-dimensional weight vector, where w_j is the weight corresponding to feature f_j(y, x).

Learning MaxEnt classifiers involves finding the weight vector w given training data D and the vector of feature functions f. We'll use a uniform Gaussian prior on the feature weights w, i.e.:

P(w) \propto \exp(-\alpha \, w \cdot w)

where \alpha is a user-settable parameter that controls the degree of regularization.

Question 1:
1. Give an expression for the regularized negative log conditional likelihood of a generic data set D = ((x_1, y_1), \ldots, (x_n, y_n)), ignoring any terms and factors that do not depend on w.
2. Give an expression for the derivative of the regularized negative log likelihood with respect to a feature weight w_j.

Now we will construct an estimator for the feature weights w. We will use the Nursery data set that was used in previous exercises, which you can find in /course/cs195f/asgn/naive_bayes/handout/nursery/nursery.mat
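As a minimal sketch of the quantities Question 1 asks for, the following NumPy function computes the regularized negative log conditional likelihood and its gradient for a MaxEnt classifier. The function name, array layout, and the hyperparameter name `alpha` are assumptions for illustration, not part of the assignment's required interface.

```python
import numpy as np

def maxent_nll_and_grad(w, F, y, alpha):
    """Regularized negative log conditional likelihood and its gradient.

    Assumed (illustrative) data layout:
      F[i, c, :] -- feature vector f(c, x_i) for example i and candidate class c
      y[i]       -- index of the true class for example i
      alpha      -- regularization strength (hypothetical hyperparameter name)
    """
    n, C, m = F.shape
    scores = F @ w  # (n, C): w . f(c, x_i) for every example/class pair

    # log Z_x(w) via the log-sum-exp trick, for numerical stability
    mx = scores.max(axis=1, keepdims=True)
    logZ = mx[:, 0] + np.log(np.exp(scores - mx).sum(axis=1))

    # NLL = sum_i [ log Z_{x_i}(w) - w . f(y_i, x_i) ] + alpha * (w . w)
    nll = (logZ - scores[np.arange(n), y]).sum() + alpha * (w @ w)

    # Gradient wrt w: expected features under P(c | x_i) minus observed
    # features, plus the derivative of the regularizer, 2 * alpha * w.
    P = np.exp(scores - logZ[:, None])            # (n, C) class posteriors
    expected = np.einsum('nc,ncm->m', P, F)       # sum_i E_{P(c|x_i)}[f(c,x_i)]
    observed = F[np.arange(n), y].sum(axis=0)     # sum_i f(y_i, x_i)
    grad = expected - observed + 2 * alpha * w
    return nll, grad
```

A quick way to sanity-check such a gradient is to compare it against central finite differences of the returned NLL before handing `w` to any numerical optimizer.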
This note was uploaded on 11/03/2009 for the course CS 195f taught by Professor Johnson during the Spring '09 term at Sanford-Brown Institute.