SPR_LectureHandouts_Chapter_03_Part2

Pattern Recognition (ECE-8443)
Chapter 3, Part 2: Bayesian Parameter Estimation
Saurabh Prasad, Electrical and Computer Engineering Department, Mississippi State University
Outline
- Bayesian Estimation (BE)
- Bayesian Parameter Estimation: Gaussian Case
- Bayesian Parameter Estimation: General Estimation
- Problems of Dimensionality
Introduction to Bayesian Parameter Estimation
- In Chapter 2, we learned how to design an optimal classifier if we knew the prior probabilities, P(ωi), and the class-conditional densities, p(x|ωi).
- The Bayesian approach: treat the parameter(s) as random variables (or vectors) having some known prior distribution. Observing samples converts this prior into a posterior.
- Bayesian learning: sharpen the a posteriori density, causing it to peak near the true parameter value (see the sketch after this list).
- Supervised vs. unsupervised: do we know the class assignments of the training data?
- Bayesian estimation and ML estimation produce very similar results in many cases.
- Bayesian estimation reduces statistical inference (prior knowledge or beliefs about the world) to probabilities.
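
(Aside, not from the handout: a minimal Python sketch of the "sharpening" effect, using the standard conjugate-Gaussian update that the Gaussian-case section of this chapter covers. We place a Gaussian prior N(mu0, sigma0_2) on the unknown mean of a Gaussian with known variance sigma2; all numbers and variable names below are illustrative assumptions.)

    import numpy as np

    true_mu, sigma2 = 2.0, 1.0      # data ~ N(true_mu, sigma2); variance assumed known
    mu0, sigma0_2 = 0.0, 4.0        # prior on the unknown mean: mu ~ N(mu0, sigma0_2)

    rng = np.random.default_rng(0)
    samples = rng.normal(true_mu, np.sqrt(sigma2), size=100)

    for n in (1, 10, 100):
        xbar = samples[:n].mean()
        # Conjugate Gaussian update (prior x likelihood -> Gaussian posterior):
        #   mu_n      = (n*sigma0^2*xbar + sigma^2*mu0) / (n*sigma0^2 + sigma^2)
        #   sigma_n^2 = (sigma0^2*sigma^2) / (n*sigma0^2 + sigma^2)
        mu_n = (n * sigma0_2 * xbar + sigma2 * mu0) / (n * sigma0_2 + sigma2)
        sigma_n2 = (sigma0_2 * sigma2) / (n * sigma0_2 + sigma2)
        print(f"n={n:3d}: posterior mean = {mu_n:.3f}, posterior variance = {sigma_n2:.4f}")

The posterior variance shrinks roughly as 1/n, so the density over the parameter concentrates (peaks) near the true mean as more samples are observed.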
Bayesian Estimation
- Bayes formula allows us to compute the posteriors, P(ωi|x), from the priors, P(ωi), and the likelihoods, p(x|ωi).
- Posterior probabilities, P(ωi|x), are central to Bayesian classification.
- But what if the priors and class-conditional densities are unknown? The answer is that we can still compute the posterior, P(ωi|x), using all of the information at our disposal (e.g., the training data); a sketch follows this list.
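
(Aside, not from the handout: a minimal Python sketch of Bayes' rule for class posteriors, assuming one-dimensional Gaussian class-conditional densities. The priors, means, and standard deviations below are made-up illustrative values.)

    import numpy as np
    from scipy.stats import norm

    priors = np.array([0.6, 0.4])     # P(w1), P(w2)
    means = np.array([0.0, 3.0])      # class-conditional means
    stds = np.array([1.0, 1.0])       # class-conditional standard deviations

    x = 1.2                                            # a test observation
    likelihoods = norm.pdf(x, loc=means, scale=stds)   # p(x | w_i)
    evidence = np.sum(likelihoods * priors)            # p(x) = sum_i p(x | w_i) P(w_i)
    posteriors = likelihoods * priors / evidence       # P(w_i | x), sums to 1
    print(posteriors)

When the densities are unknown, Bayesian estimation learns them from the training data D, so the classifier effectively uses P(ωi | x, D) in place of fixed, known quantities.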