USC CS574: Computer Vision, Fall 2010. Copyright 2010 by R. Nevatia.

Lecture 20: November 3, 2010
• HW5 to be posted today, due Nov 10
• Office hours (today only): 1:30 to 2:30 PM
• Make-up class: Friday, Nov 5, 9:30 to 10:50 am, Studio D
• Review
  – Bayesian classifier, Bayesian risk
  – Nearest neighbor classifier
  – Parametric classifiers
• Today
  – More on parametric classifiers
  – AdaBoost, cascade classifiers
Parametric Models for Classifiers
• For each class k, approximate P(x | class k) by a parametric function such as a Gaussian distribution. The decision rule can then be expressed in terms of the observed features and the model parameters.
• Gaussian density function
• A multi-variate Gaussian is defined by a mean vector and a co-variance matrix; see next slide (from p. 500 of the FP book).
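A minimal Python sketch of the idea on this slide (my illustration, not code from the course): fit a Gaussian to each class's training samples and classify a new observation x by the class that maximizes prior × P(x | class k). The function names and the use of scipy.stats.multivariate_normal are assumptions made for the example.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussian_classes(X, y):
    """Estimate a mean vector, co-variance matrix, and prior for each class label in y."""
    X, y = np.asarray(X), np.asarray(y)
    models, n = {}, len(y)
    for k in np.unique(y):
        Xk = X[y == k]
        mu = Xk.mean(axis=0)
        Sigma = np.cov(Xk, rowvar=False)   # unbiased (1/(n-1)) estimate, as defined on the next slide
        prior = len(Xk) / n
        models[k] = (mu, Sigma, prior)
    return models

def classify(x, models):
    """Assign x to the class maximizing prior * P(x | class k)."""
    scores = {k: prior * multivariate_normal.pdf(x, mean=mu, cov=Sigma)
              for k, (mu, Sigma, prior) in models.items()}
    return max(scores, key=scores.get)
```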
Co-variance Matrix
• Let a feature vector x have d components (x_1, x_2, ..., x_d)
• Let X be a set of n feature vectors {x_i}
• Let μ be the mean vector, i.e. the average of the vectors in X
• Let x_i' = x_i − μ
• Let Σ be the d × d co-variance matrix, defined by
    Σ = (1/(n−1)) Σ_i x_i' x_i'^T
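The definition above maps directly to a few lines of NumPy. This sketch (mine, not from the slides) assumes X is an (n, d) array whose rows are the feature vectors x_i.

```python
import numpy as np

def covariance_matrix(X):
    """Mean vector and d x d co-variance matrix of the rows of X, per the slide's definition."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    mu = X.mean(axis=0)              # mean vector: average of the x_i
    Xc = X - mu                      # rows are x_i' = x_i - mu
    Sigma = (Xc.T @ Xc) / (n - 1)    # (1/(n-1)) * sum_i x_i' x_i'^T
    return mu, Sigma

# Sanity check against NumPy's built-in estimator:
# X = np.random.randn(100, 3)
# mu, Sigma = covariance_matrix(X)
# assert np.allclose(Sigma, np.cov(X, rowvar=False))
```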
Multi-dimensional Gaussian Distribution
• [Figure from p. 500 of the FP book; not reproduced in this preview]
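The slide image is not available in the extracted text; the standard d-dimensional Gaussian density it refers to (stated here as a textbook identity, not taken from the slide itself) is

```latex
N(x;\, \mu, \Sigma) \;=\; \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}
\exp\!\left(-\tfrac{1}{2}\,(x-\mu)^{\mathsf T}\,\Sigma^{-1}\,(x-\mu)\right)
```

where μ is the mean vector and Σ is the d × d co-variance matrix from the previous slide.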
Parametric Classifiers
• Compute the distance of each new observation x from the mean of each class.
• The distance measure can be weighted by the co-variance (Mahalanobis distance).
• Classify based on distance and prior probabilities (Algorithm 22.2 in the FP book).
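Continuing the sketch above (again an illustration, not a transcription of Algorithm 22.2), classification by Mahalanobis distance combined with priors might look like this; the -2·log(prior) term is the prior's contribution after taking -2·log of the Gaussian posterior, with the constant and log|Σ| terms dropped for simplicity.

```python
import numpy as np

def mahalanobis_sq(x, mu, Sigma):
    """Squared Mahalanobis distance (x - mu)^T Sigma^{-1} (x - mu)."""
    diff = np.asarray(x) - mu
    return float(diff @ np.linalg.solve(Sigma, diff))

def classify_by_distance(x, models):
    """Assign x to the class with the smallest variance-weighted distance, adjusted by its prior."""
    best_k, best_score = None, np.inf
    for k, (mu, Sigma, prior) in models.items():
        score = mahalanobis_sq(x, mu, Sigma) - 2.0 * np.log(prior)
        if score < best_score:
            best_k, best_score = k, score
    return best_k
```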
Approximating the Distribution Function
• It is difficult to estimate the joint distribution function when ...