Stat841f09 - Wiki Course Notes




Since we do not know a point's class label ahead of time, we would have to assume the data point belongs to one class or the other. All classes therefore need to have the same shape for this classification method to be applicable, so the method works for LDA. If the classes have different shapes, in other words different covariance matrices, can we use the same method to transform all data points to a common spherical shape? The answer is no. Suppose you have two classes with different shapes and you want to transform both to the same shape. Given a data point, which transformation should you use to decide which class it belongs to? For example, if you use the transformation built from class A's covariance, you have already assumed that the data point belongs to class A, which is exactly what you are trying to determine.

## Kernel QDA (http://portal.acm.org/citation.cfm?id=1340851)

In practice, QDA often fits the data better than LDA, because QDA does not make LDA's assumption that the covariance matrix is identical for every class. However, QDA still assumes that each class-conditional distribution is Gaussian, which is often not the case in real data. Another method, kernel QDA, drops the Gaussian assumption and can work better.

## The Number of Parameters in LDA and QDA

Both LDA and QDA require us to estimate parameters. The more parameters we have to estimate, the less robust our classification algorithm will be.

LDA: We only need to compare one given class against each of the remaining K-1 classes, so there are K-1 differences. Each difference is a linear function a^T x + b, which requires d+1 parameters. Therefore, there are (K-1)(d+1) parameters in total.

QDA: There are again K-1 differences, but each is a quadratic function x^T A x + b^T x + c, where A is a symmetric d-by-d matrix. Each difference therefore requires d(d+1)/2 + d + 1 parameters, for (K-1)(d(d+1)/2 + d + 1) parameters in total.

[Figure: a plot of the number of parameters that must be estimated, in units of (K-1); the x-axis is the number of dimensions d in the data.]
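The two counts can be computed directly. The sketch below uses the standard accounting for pairwise discriminant functions (a linear function needs d+1 parameters, a quadratic form needs d(d+1)/2 + d + 1); the function names are my own, not from the notes.

```python
# Hedged sketch: parameter counts for LDA vs. QDA discriminants,
# for K classes in d dimensions.

def lda_param_count(d: int, K: int) -> int:
    """LDA: K-1 differences, each a linear function a^T x + b
    with d + 1 parameters."""
    return (K - 1) * (d + 1)

def qda_param_count(d: int, K: int) -> int:
    """QDA: K-1 differences, each a quadratic x^T A x + b^T x + c;
    symmetric A has d(d+1)/2 entries, b has d, c has 1."""
    return (K - 1) * (d * (d + 1) // 2 + d + 1)

if __name__ == "__main__":
    K = 3
    for d in (2, 10, 100):
        print(d, lda_param_count(d, K), qda_param_count(d, K))
```

For K = 3 and d = 100, LDA needs 202 parameters while QDA needs 10,302, which is why QDA's estimates degrade much faster as the dimension grows.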
As the plot shows, QDA is far less robust than LDA for high-dimensional data sets.

Related link: LDA: [5] (http://www.stat.psu.edu/~jiali/course/...
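The earlier point about class-specific transformations can be made concrete. In the sketch below (an assumed numerical setup, not from the notes), a whitening transform W = Sigma_A^(-1/2) built from class A's covariance spheres class A but not class B, so when covariances differ there is no single linear transform that spheres every class at once.

```python
# Hedged sketch: whitening with one class's covariance only spheres
# that class, illustrating why the LDA trick fails when covariances differ.
import numpy as np

cov_a = np.array([[4.0, 0.0], [0.0, 1.0]])   # class A: axis-aligned, elongated
cov_b = np.array([[1.0, 0.9], [0.9, 1.0]])   # class B: strongly correlated

# Whitening transform from class A's covariance: W = Sigma_A^(-1/2),
# computed via the eigendecomposition of the symmetric matrix cov_a.
vals, vecs = np.linalg.eigh(cov_a)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T

# After applying W, class A's covariance becomes the identity...
print(np.allclose(W @ cov_a @ W.T, np.eye(2)))   # True
# ...but class B's does not: the same W cannot sphere both classes.
print(np.allclose(W @ cov_b @ W.T, np.eye(2)))   # False
```

Whitening with class B's covariance instead would sphere B but distort A, so whichever transform you pick encodes an assumption about the point's class.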

This document was uploaded on 03/07/2014.
