L5_PR - ECEN 689 Statistical Computation in GSP

ECEN 689 Statistical Computation in GSP
http://www.ece.tamu.edu/~ulisses/ECEN689/
Lecture 5: Review of Pattern Recognition
Ulisses Braga Neto
Genomic Signal Processing Laboratory, Department of Electrical and Computer Engineering, Texas A&M University

Multivariate Classification
Classification of expression profiles (vectors), illustrated with breast cancer data.

Classifier Design
Predictors (genes, epitopes) are related probabilistically to a target (disease, immunization, survivability). A classifier h maps a predictor vector to a predicted label, and its performance is measured by the classification error.

Error Estimation / Feature Selection
Given data such as the breast cancer set: which predictors should be used, and what is an estimate of the classification error based on these data?

Basic Pipeline

Optimal Classifier
Every classification problem has an optimal classifier, called the Bayes classifier. The corresponding classification error is called the Bayes error and is usually nonzero. To find the Bayes classifier and the Bayes error one needs to know the joint distribution F_{XY} of predictors and label. This distribution is usually unknown, or only partially known, so one must resort to designing sub-optimal classifiers based on training data.

Gaussian Case
The class-conditional densities are modeled as multivariate Gaussians with means \mu_0, \mu_1 and covariance matrices \Sigma_0, \Sigma_1.

Equal-Variance Case
If the covariance matrices can be assumed equal (\Sigma_0 = \Sigma_1 = \Sigma), then the optimal classifier is linear: h(x) = 1 if a^T x + b > 0 and h(x) = 0 otherwise, where a = \Sigma^{-1}(\mu_1 - \mu_0) and b = -a^T (\mu_0 + \mu_1)/2 + \log(c_1/c_0), with c_0 and c_1 the class prior probabilities (the log term vanishes when the priors are equal).

Example
[Figure: two-dimensional example with axes Gene 1 and Gene 2; the linear decision boundary passes through the midpoint (\mu_0 + \mu_1)/2 of the class means.]

Example - II
[Figure from R. Duda, P. Hart, and D. Stork, Pattern Classification, 2nd ed., John Wiley & Sons, 2001.]

Optimal Classification Error
The optimal classifier in the Gaussian equal-variance case is a hyperplane, and its error (assuming equal class priors) can be shown to be \varepsilon_{bay} = \Phi(-\delta/2), where \Phi is the cdf of a standard Gaussian and \delta is the Mahalanobis distance between the classes, \delta = \sqrt{(\mu_1 - \mu_0)^T \Sigma^{-1} (\mu_1 - \mu_0)}.

Linear Discriminant Analysis ...
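To make the equal-variance formulas concrete, here is a minimal Python sketch (not part of the lecture) that builds the optimal linear discriminant from assumed model parameters and evaluates the exact Bayes error \Phi(-\delta/2). The parameter values (mu0, mu1, Sigma) and the test point are made-up numbers for a two-gene toy problem.

```python
# Illustrative sketch, not from the lecture: optimal linear classifier and its
# exact error for the equal-covariance Gaussian model described above.
# The means, covariance, and test point below are assumed toy values.
import numpy as np
from scipy.stats import norm

def optimal_linear_classifier(mu0, mu1, Sigma, c0=0.5, c1=0.5):
    """Return (a, b) such that the Bayes classifier is 1{a^T x + b > 0}."""
    Sigma_inv = np.linalg.inv(Sigma)
    a = Sigma_inv @ (mu1 - mu0)                 # normal vector of the hyperplane
    b = -a @ (mu0 + mu1) / 2 + np.log(c1 / c0)  # offset, including the prior term
    return a, b

def bayes_error_equal_priors(mu0, mu1, Sigma):
    """Exact Bayes error Phi(-delta/2), with delta the Mahalanobis distance."""
    diff = mu1 - mu0
    delta = np.sqrt(diff @ np.linalg.solve(Sigma, diff))
    return norm.cdf(-delta / 2)

# Toy two-gene example (made-up numbers, in the spirit of the Gene 1 / Gene 2 figure)
mu0 = np.array([0.0, 0.0])
mu1 = np.array([1.0, 1.5])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

a, b = optimal_linear_classifier(mu0, mu1, Sigma)
x = np.array([0.8, 0.6])
label = int(a @ x + b > 0)                      # predicted class for one profile
print("a =", a, " b =", b, " label(x) =", label)
print("Bayes error =", bayes_error_equal_priors(mu0, mu1, Sigma))
```

With equal priors the \log(c_1/c_0) term drops out and the decision boundary passes through the midpoint of the class means, which is what the two-gene figure above depicts.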
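The notes then motivate the data-driven setting: since F_{XY} is unknown, a sub-optimal classifier must be designed from training data and its error estimated from data as well. The sketch below is one illustrative way to do this under the equal-covariance assumption: a plug-in LDA rule fitted from samples, with a leave-one-out cross-validation error estimate. The synthetic data and the helper names (lda_fit, lda_predict, loo_error) are assumptions for illustration, not material from the lecture.

```python
# Illustrative data-driven pipeline: design a plug-in LDA classifier from
# training data and estimate its error by leave-one-out cross-validation.
# The synthetic data below stands in for expression profiles.
import numpy as np

def lda_fit(X, y):
    """Plug-in LDA: estimate class means and pooled covariance, return (a, b)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled sample covariance (reflects the equal-covariance model assumption)
    S = ((len(X0) - 1) * np.cov(X0, rowvar=False) +
         (len(X1) - 1) * np.cov(X1, rowvar=False)) / (len(X) - 2)
    a = np.linalg.solve(S, mu1 - mu0)
    b = -a @ (mu0 + mu1) / 2
    return a, b

def lda_predict(a, b, X):
    """Apply the linear rule 1{a^T x + b > 0} to each row of X."""
    return (X @ a + b > 0).astype(int)

def loo_error(X, y):
    """Leave-one-out estimate of the classification error of the designed rule."""
    errors = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        a, b = lda_fit(X[mask], y[mask])
        errors += lda_predict(a, b, X[i:i + 1])[0] != y[i]
    return errors / len(y)

# Synthetic two-class, two-feature training set (assumed values)
rng = np.random.default_rng(0)
n = 40
X = np.vstack([rng.normal(0.0, 1.0, size=(n, 2)),
               rng.normal(1.2, 1.0, size=(n, 2))])
y = np.repeat([0, 1], n)
print("leave-one-out error estimate:", loo_error(X, y))
```

Leave-one-out is only one possible error estimator; it is used here solely to illustrate the estimation step of the pipeline sketched in the notes.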