Tutorial 10

1. Both the linear regression model and the separating hyperplane in classification (e.g. in an SVM) are built on a linear combination of the covariates. Explain how the two differ in the way the coefficients are estimated and in the rules used for prediction.

2. Support vector machines can also be used for function estimation (support vector regression). Consider the motorcycle data: fit the model, plot the fitted curve, and discuss the role of gamma in the SVM (a Python sketch is given after the questions).

3. For the leukemia gene expression data, use samples 11–33 as the training set and the remaining samples as the validation set, and compare SVM and FDA (see the sketch after the questions).

4. For the data set
$$
\begin{array}{ll}
X & Y \ \text{(classes)} \\
X_1 = (x_{11}, \dots, x_{1p}) & (A_1, B_1) = (0, 1) \\
X_2 = (x_{21}, \dots, x_{2p}) & (A_2, B_2) = (0, 1) \\
\quad \vdots & \quad \vdots \\
X_n = (x_{n1}, \dots, x_{np}) & (A_n, B_n) = (1, 0)
\end{array}
$$
consider the following classification scheme: for a new sample $x_{\mathrm{new}} = (x_1, \dots, x_p)$, choose a kernel $K$ and a bandwidth
$h$, and calculate
$$
\hat A(x_{\mathrm{new}}) = \frac{\sum_{i=1}^{n} K(\|X_i - x_{\mathrm{new}}\|/h)\, A_i}{\sum_{i=1}^{n} K(\|X_i - x_{\mathrm{new}}\|/h)}
\qquad \text{and} \qquad
\hat B(x_{\mathrm{new}}) = \frac{\sum_{i=1}^{n} K(\|X_i - x_{\mathrm{new}}\|/h)\, B_i}{\sum_{i=1}^{n} K(\|X_i - x_{\mathrm{new}}\|/h)}.
$$
Define the probabilities that $x_{\mathrm{new}} \in A$ and $x_{\mathrm{new}} \in B$ respectively as
$$
p_A(x_{\mathrm{new}}) = \frac{\exp(\hat A(x_{\mathrm{new}}))}{\exp(\hat A(x_{\mathrm{new}})) + \exp(\hat B(x_{\mathrm{new}}))},
\qquad
p_B(x_{\mathrm{new}}) = \frac{\exp(\hat B(x_{\mathrm{new}}))}{\exp(\hat A(x_{\mathrm{new}})) + \exp(\hat B(x_{\mathrm{new}}))}.
$$
We classify $x_{\mathrm{new}} \in A$ if $p_A(x_{\mathrm{new}}) > p_B(x_{\mathrm{new}})$, and $x_{\mathrm{new}} \in B$ otherwise. Consider the banknotes data, split into a training set and a validation set. With $h = 1$, what is the classification error? Try different values of $h$ (a sketch of this scheme is given below).
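A minimal Python sketch for question 2, using support vector regression from scikit-learn. The file name motorcycle.csv and the column names times/accel are placeholders, since the preview does not say how the data are stored; the loop over gamma illustrates its role as the (inverse-width) parameter of the RBF kernel.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.svm import SVR

# Hypothetical file and column names for the motorcycle data.
data = pd.read_csv("motorcycle.csv")
x = data["times"].to_numpy().reshape(-1, 1)
y = data["accel"].to_numpy()

grid = np.linspace(x.min(), x.max(), 500).reshape(-1, 1)
plt.scatter(x.ravel(), y, s=10, c="grey", label="data")

# Larger gamma makes the RBF kernel narrower, so the fitted curve follows the
# data more closely (and can overfit); smaller gamma gives a smoother curve.
for gamma in (0.001, 0.01, 0.1):
    fit = SVR(kernel="rbf", gamma=gamma, C=100).fit(x, y)
    plt.plot(grid, fit.predict(grid), label=f"gamma = {gamma}")
plt.legend()
plt.show()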
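For question 3, a sketch of the SVM/FDA comparison. The file leukemia.csv, its class column, and the 0-based indexing of samples 11–33 are assumptions, and scikit-learn's LinearDiscriminantAnalysis is used here only as a stand-in for FDA.

import pandas as pd
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

data = pd.read_csv("leukemia.csv")        # hypothetical file name
y = data["class"]                         # hypothetical label column
X = data.drop(columns="class")

train = list(range(10, 33))               # samples 11-33 (1-based) as training set
valid = [i for i in range(len(data)) if i not in train]

# Linear-kernel SVM versus a discriminant-analysis classifier,
# both evaluated on the held-out validation samples.
for name, model in [("SVM", SVC(kernel="linear")),
                    ("FDA", LinearDiscriminantAnalysis())]:
    model.fit(X.iloc[train], y.iloc[train])
    err = (model.predict(X.iloc[valid]) != y.iloc[valid]).mean()
    print(f"{name}: validation error = {err:.3f}")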
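For question 4, a direct implementation of the classification scheme written out above. The Gaussian kernel, the file banknotes.csv, its class column coded as the strings "A"/"B", and the train/validation split are all assumptions, since the preview does not show the split given in the tutorial. Note that $p_A(x_{\mathrm{new}}) > p_B(x_{\mathrm{new}})$ exactly when $\hat A(x_{\mathrm{new}}) > \hat B(x_{\mathrm{new}})$, so comparing $p_A$ with $1/2$ gives the same rule.

import numpy as np
import pandas as pd

def kernel(u):
    # Gaussian kernel (an assumed choice; any kernel K works in the scheme).
    return np.exp(-0.5 * u ** 2)

def classify(X_train, A, B, x_new, h):
    # Weights K(||X_i - x_new|| / h), then kernel-weighted averages of A_i and B_i.
    w = kernel(np.linalg.norm(X_train - x_new, axis=1) / h)
    A_hat = np.sum(w * A) / np.sum(w)
    B_hat = np.sum(w * B) / np.sum(w)
    p_A = np.exp(A_hat) / (np.exp(A_hat) + np.exp(B_hat))
    return "A" if p_A > 0.5 else "B"       # p_A > p_B is the same as p_A > 1/2

data = pd.read_csv("banknotes.csv")        # hypothetical file name
labels = data["class"].to_numpy()          # hypothetical label column, coded "A"/"B"
X = data.drop(columns="class").to_numpy()
A = (labels == "A").astype(float)          # (A_i, B_i) coding: class A -> (1, 0), class B -> (0, 1)
B = 1.0 - A

train = np.arange(0, 100)                  # placeholder split; use the one given in the tutorial
valid = np.arange(100, len(data))

for h in (0.5, 1.0, 2.0):
    pred = [classify(X[train], A[train], B[train], X[i], h) for i in valid]
    err = np.mean([p != l for p, l in zip(pred, labels[valid])])
    print(f"h = {h}: classification error = {err:.3f}")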