lec6 Parameter Estimation

CSE 190a, Fall '06: Parameter Estimation
Biometrics, CSE 190-a, Lecture 6

Announcements
• Readings on E-reserves
• Project proposal due today

Pattern Classification
All materials in these slides were taken from Pattern Classification (2nd ed.) by R. O. Duda, P. E. Hart, and D. G. Stork, John Wiley & Sons, 2000, with the permission of the authors and the publisher.

Chapter 3: Maximum-Likelihood & Bayesian Parameter Estimation (part 1)
• Introduction
• Maximum-Likelihood Estimation
• Example of a Specific Case
• The Gaussian Case: unknown μ and σ
• Bias

Introduction
• Data availability in a Bayesian framework
  • We could design an optimal classifier if we knew:
    • P(ω_i) (the priors)
    • P(x | ω_i) (the class-conditional densities)
  • Unfortunately, we rarely have this complete information!
• Designing a classifier from a training sample
  • Estimating the priors poses no problem
  • Samples are often too small to estimate the class-conditional densities (large dimension of feature space!)

A priori information about the problem
• Assume normality of P(x | ω_i): P(x | ω_i) ~ N(μ_i, Σ_i)
  • Each density is then characterized by two parameters, μ_i and Σ_i
• Estimation techniques
  • Maximum-Likelihood (ML) and Bayesian estimation
  • The results are nearly identical, but the approaches are different
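The "optimal classifier" mentioned above picks the class with the largest posterior P(ω_i | x), which by Bayes' rule is proportional to P(ω_i) · P(x | ω_i). A minimal sketch of that rule, assuming a hypothetical two-class, one-dimensional problem with known (made-up) Gaussian class-conditional densities:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Evaluate the univariate normal density N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(x, priors, params):
    """Bayes decision rule: choose the class i maximizing
    P(w_i) * p(x | w_i), which is proportional to the posterior P(w_i | x)."""
    scores = [p * gaussian_pdf(x, mu, sigma)
              for p, (mu, sigma) in zip(priors, params)]
    return max(range(len(scores)), key=lambda i: scores[i])

# Hypothetical example: class 0 ~ N(0, 1), class 1 ~ N(3, 1), equal priors.
priors = [0.5, 0.5]
params = [(0.0, 1.0), (3.0, 1.0)]
print(classify(0.2, priors, params))  # x near class 0's mean -> class 0
print(classify(2.9, priors, params))  # x near class 1's mean -> class 1
```

The slides' point is that this rule is only usable once the priors and class-conditional densities are known; parameter estimation is what supplies the (μ_i, Σ_i) when they are not.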
ML vs. Bayesian viewpoints
• Parameters in ML estimation are fixed but unknown!
  • The best parameters are obtained by maximizing the probability of the observed samples
• Bayesian methods view the parameters as random variables having some known distribution
• In either approach, we use P(ω_i | x) for our classification rule!

Maximum-Likelihood Estimation
• Has good convergence properties as the sample size increases
• Simpler than alternative techniques
• General principle
  • Assume we have c classes and P(x | ω_j) ~ N(μ_j, Σ_j)
  • Write P(x | ω_j) ≡ P(x | ω_j, θ_j), where the parameter vector collects the mean and covariance entries:

    θ_j = (μ_j, Σ_j) = (μ_j1, μ_j2, …, σ_j11, σ_j22, …, cov(x_m, x_n), …)

• Use the information provided by the training samples to estimate θ = (θ_1, θ_2, …, θ_c); each θ_i (i = 1, 2, …, c) is associated with its own category
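For a univariate Gaussian, the ML estimates have a closed form: the sample mean for μ and the average squared deviation for σ². A minimal sketch (the check against a known distribution uses made-up parameters):

```python
import random

def ml_gaussian(samples):
    """Maximum-likelihood estimates for a univariate Gaussian.
    mu_hat is the sample mean; sigma2_hat uses a 1/n factor, which
    makes it a biased estimate of the true variance -- the 'Bias'
    topic in the chapter outline above."""
    n = len(samples)
    mu_hat = sum(samples) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in samples) / n
    return mu_hat, sigma2_hat

# Draw samples from a known Gaussian and check that the estimates
# recover its parameters (true mean 5.0, true variance 2.0**2 = 4.0).
random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(10000)]
mu_hat, sigma2_hat = ml_gaussian(data)
print(mu_hat, sigma2_hat)  # estimates should land close to (5.0, 4.0)
```

In the multivariate case of the slides, the same principle gives the sample mean vector for μ_j and the (biased, 1/n) sample covariance for Σ_j, one pair per class, estimated from that class's training samples.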

This note was uploaded on 02/14/2008 for the course CSE 190A taught by Professor Kriegman during the Fall '06 term at UCSD.
