Speech Recognition: Pattern Classification
Veton Këpuska

Pattern Classification
- Introduction
- Parametric classifiers
- Semi-parametric classifiers
- Dimensionality reduction
- Significance testing

Pattern Classification
- Goal: to classify objects (or patterns) into categories (or classes).
- Types of problems:
  1. Supervised: classes are known beforehand, and data samples of each class are available.
- Processing chain: Observations -> Feature Extraction -> Feature Vectors $x$ -> Classifier -> Class $\omega_i$

Probability Basics
- Discrete probability mass function (PMF): $P(\omega_i)$, with $\sum_i P(\omega_i) = 1$
- Continuous probability density function (PDF): $p(x)$, with $\int p(x)\,dx = 1$
- Expected value: $E(x) = \int x\,p(x)\,dx$

Kullback-Leibler Distance
- Can be used to compute a distance between two probability mass functions, $P(z_i)$ and $Q(z_i)$:
  $$D(P \| Q) = \sum_i P(z_i) \log \frac{P(z_i)}{Q(z_i)}$$
- Makes use of the inequality $\log x \le x - 1$, which shows that the distance is non-negative:
  $$-D(P \| Q) = \sum_i P(z_i) \log \frac{Q(z_i)}{P(z_i)} \le \sum_i P(z_i) \left( \frac{Q(z_i)}{P(z_i)} - 1 \right) = \sum_i Q(z_i) - \sum_i P(z_i) = 0$$
- Known as relative entropy in information theory.
- Not symmetric in its arguments; a symmetric variant is $D(P \| Q) + D(Q \| P)$.

Bayes' Theorem
- Define:
  - $\{\omega_i\}$: a set of $M$ mutually exclusive classes
  - $P(\omega_i)$: a priori probability of class $\omega_i$
  - $p(x \mid \omega_i)$: PDF of feature vector $x$ in class $\omega_i$
  - $P(\omega_i \mid x)$: a posteriori probability of $\omega_i$ given $x$
- From Bayes' rule:
  $$P(\omega_i \mid x) = \frac{p(x \mid \omega_i) P(\omega_i)}{p(x)}, \qquad p(x) = \sum_{i=1}^{M} p(x \mid \omega_i) P(\omega_i)$$

Bayes Decision Theory
- The probability of making an error given $x$ is $P(\text{error} \mid x) = 1 - P(\omega_i \mid x)$ if we decide class $\omega_i$.
- To minimize $P(\text{error} \mid x)$ (and hence $P(\text{error})$): choose $\omega_i$ if $P(\omega_i \mid x) > P(\omega_j \mid x)$ for all $j \neq i$.
- For a two-class problem this decision rule means: choose $\omega_1$ if $p(x \mid \omega_1) P(\omega_1) > p(x \mid \omega_2) P(\omega_2)$; else choose $\omega_2$.
- The rule can be expressed as a likelihood ratio test: choose $\omega_1$ if
  $$\frac{p(x \mid \omega_1)}{p(x \mid \omega_2)} > \frac{P(\omega_2)}{P(\omega_1)}$$

Bayes Risk
- Define a cost function $\lambda_{ij}$ and the conditional risk $R(\omega_i \mid x)$:
  - $\lambda_{ij}$ is the cost of classifying $x$ as $\omega_i$ when it really belongs to $\omega_j$.
  - $R(\omega_i \mid x)$ is the risk of classifying $x$ as $\omega_i$:
    $$R(\omega_i \mid x) = \sum_{j=1}^{M} \lambda_{ij} P(\omega_j \mid x)$$

Short Python sketches of these computations follow.
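The KL distance defined above maps directly to a few lines of NumPy. This is a minimal sketch, not from the slides; the function name kl_distance and the example PMFs are illustrative. It also demonstrates the asymmetry noted above.

```python
import numpy as np

def kl_distance(p, q):
    """D(P || Q) = sum_i P(z_i) * log(P(z_i) / Q(z_i)).

    p and q are PMFs over the same outcomes z_i; q must be
    nonzero wherever p is nonzero for the sum to be finite.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with P(z_i) = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_distance(p, q))                      # >= 0, per the log x <= x - 1 bound
print(kl_distance(q, p))                      # differs: D is not symmetric
print(kl_distance(p, q) + kl_distance(q, p))  # symmetric variant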
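Bayes' theorem is equally direct to compute. In this sketch the helper name posteriors and all the numbers are hypothetical; it evaluates $P(\omega_i \mid x)$ from class-conditional likelihoods and priors, then applies the minimum-error rule of choosing the class with the largest posterior.

```python
import numpy as np

def posteriors(likelihoods, priors):
    """Bayes' theorem: P(w_i | x) = p(x | w_i) P(w_i) / p(x),
    where p(x) = sum_i p(x | w_i) P(w_i)."""
    joint = np.asarray(likelihoods, dtype=float) * np.asarray(priors, dtype=float)
    return joint / joint.sum()  # normalize by the evidence p(x)

# p(x | w_i) evaluated at one observed x, for M = 3 classes
likelihoods = np.array([0.05, 0.20, 0.10])
priors = np.array([0.5, 0.3, 0.2])  # P(w_i), summing to 1

post = posteriors(likelihoods, priors)
print(post, post.sum())               # posteriors sum to 1
print("decide class", post.argmax())  # minimum-error rule: maximize P(w_i | x)
```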

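For the two-class likelihood ratio test, here is a sketch assuming 1-D Gaussian class-conditional densities; the means, variance, and priors are made up for illustration.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """1-D Gaussian density, standing in for the class PDFs p(x | w_i)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical class-conditional parameters and priors
MU1, MU2, SIGMA = 0.0, 2.0, 1.0
P1, P2 = 0.7, 0.3  # P(w1), P(w2)

def decide(x):
    """Choose w1 iff p(x | w1) / p(x | w2) > P(w2) / P(w1)."""
    ratio = gaussian_pdf(x, MU1, SIGMA) / gaussian_pdf(x, MU2, SIGMA)
    return "w1" if ratio > P2 / P1 else "w2"

for x in (-1.0, 1.0, 3.0):
    print(x, "->", decide(x))  # decisions flip as x moves toward MU2
```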

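Finally, a sketch of the minimum-risk decision. The preview cuts off before the slides finish defining $R(\omega_i \mid x)$, so this assumes the standard conditional-risk formula given above; the cost matrix and posteriors are made-up values.

```python
import numpy as np

# lam[i, j]: cost of classifying x as w_i when it is really w_j
# (zero cost on the diagonal for correct decisions; the off-diagonal
# values are hypothetical asymmetric costs).
lam = np.array([[0.0, 2.0],
                [1.0, 0.0]])

post = np.array([0.6, 0.4])  # posteriors P(w_j | x) from Bayes' theorem

# Conditional risk R(w_i | x) = sum_j lam[i, j] * P(w_j | x)
risk = lam @ post
print(risk)                           # [0.8, 0.6]
print("decide class", risk.argmin())  # minimum-risk decision: index 1, i.e. w2
```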
