ECEN 689: Statistical Computation in GSP
http://www.ece.tamu.edu/~ulisses/ECEN689/
Lecture 5: Review of Pattern Recognition
Ulisses Braga-Neto
Genomic Signal Processing Laboratory
Department of Electrical and Computer Engineering, Texas A&M University

Multivariate Classification
Classification of expression profiles (vectors). [Figure: breast cancer data.]

Classifier Design
Predictors (genes, epitopes) are related to a target (disease, immunization, survivability) through a probabilistic relationship. A classifier h maps a predictor vector to a predicted target label, and its quality is measured by its classification error.

Error Estimation / Feature Selection
Given data such as the breast cancer set: which predictors should be used, and what is an estimate of the classification error based on these data?

Basic Pipeline
[Figure: the classifier-design pipeline.]

Optimal Classifier
Every problem has an optimal classifier, called the Bayes classifier. The corresponding classification error is called the Bayes error, and it is usually nonzero. Finding the Bayes classifier and the Bayes error requires knowledge of the joint distribution F_XY. This distribution is usually unknown, or only partially known, so one must resort to designing suboptimal classifiers from training data.

Gaussian Case: Equal Covariances
If the class-conditional distributions are Gaussian and the covariance matrices can be assumed equal, then the optimal classifier is linear (here stated for equal priors):

    h(x) = 1  if  a^T x + b > 0,  and 0 otherwise,

where

    a = Sigma^{-1} (mu_1 - mu_0),    b = -a^T (mu_0 + mu_1) / 2.

Example
[Figure: two-gene example in the (Gene 1, Gene 2) plane; the linear decision boundary passes through the midpoint (mu_0 + mu_1)/2 of the class means.]

Example II
[Figure from R. Duda, P. Hart, and D. Stork, Pattern Classification, 2nd ed., John Wiley & Sons, 2001.]

Optimal Classification Error
The optimal classifier in the Gaussian equal-covariance case is a hyperplane, and its error can be shown to be

    epsilon = Phi(-delta / 2),

where Phi is the cdf of a standard Gaussian and delta is the Mahalanobis distance between the classes:

    delta = sqrt( (mu_1 - mu_0)^T Sigma^{-1} (mu_1 - mu_0) ).

Linear Discriminant Analysis...
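The equal-covariance Gaussian decision rule described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the lecture: the means, covariance matrix, and test points below are made-up values, and equal class priors are assumed.

```python
import numpy as np

def lda_classify(x, mu0, mu1, Sigma):
    """Optimal classifier for two Gaussian classes sharing a covariance
    matrix, with equal priors: decide class 1 when
    a^T (x - (mu0 + mu1)/2) > 0, where a = Sigma^{-1} (mu1 - mu0)."""
    a = np.linalg.solve(Sigma, mu1 - mu0)   # Sigma^{-1} (mu1 - mu0)
    midpoint = (mu0 + mu1) / 2.0            # boundary passes through the midpoint
    return int(a @ (x - midpoint) > 0)

# Illustrative two-gene example (values are hypothetical)
mu0 = np.array([0.0, 0.0])
mu1 = np.array([2.0, 1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

print(lda_classify(np.array([1.8, 0.9]), mu0, mu1, Sigma))  # point near mu1 -> 1
print(lda_classify(np.array([0.1, 0.0]), mu0, mu1, Sigma))  # point near mu0 -> 0
```

Note that the rule only needs Sigma^{-1} applied to the difference of means, so a linear solve suffices; no explicit matrix inverse is formed.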
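The Bayes error formula for this case, epsilon = Phi(-delta/2) with delta the Mahalanobis distance, can likewise be sketched numerically. Again a hedged illustration with made-up means and covariance, assuming equal priors; Phi is computed from the error function in the standard library.

```python
import math
import numpy as np

def bayes_error_equal_cov(mu0, mu1, Sigma):
    """Bayes error for two equal-covariance Gaussian classes with equal
    priors: epsilon = Phi(-delta/2), delta = Mahalanobis distance."""
    diff = mu1 - mu0
    # delta = sqrt((mu1 - mu0)^T Sigma^{-1} (mu1 - mu0))
    delta = math.sqrt(diff @ np.linalg.solve(Sigma, diff))
    # Standard-normal cdf Phi, expressed via the error function
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi(-delta / 2.0)

# Hypothetical example: means separated by delta = 3 under identity covariance
mu0 = np.array([0.0, 0.0])
mu1 = np.array([3.0, 0.0])
Sigma = np.eye(2)
print(bayes_error_equal_cov(mu0, mu1, Sigma))  # Phi(-1.5), roughly 0.067
```

As the slide notes, this error is usually nonzero: it vanishes only as the Mahalanobis distance between the classes grows without bound.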
This note was uploaded on 02/08/2010 for the course ECEN 689601 taught by Professor Staff during the Spring '10 term at Texas A&M.