The larger the probability of error, the more similar the distributions. Therefore (contrary to hypothesis testing), we formulate the target location estimation problem as the derivation of the estimate that maximizes the Bayes error associated with the model and candidate distributions. For the moment, we assume that the target has equal prior probability to be present at any location y in the neighborhood of the previously estimated location. An entity closely related to the Bayes error is the Bhattacharyya coefficient, whose general form is defined by [19]

    \rho(y) \equiv \rho[p(y), q] = \int \sqrt{p_z(y)\, q_z}\, dz .    (16)

Properties of the Bhattacharyya coefficient, such as its relation to the Fisher measure of information, the quality of the sample estimate, and explicit forms for various distributions, are given in [11, 19]. Our interest in expression (16) is, however, motivated by its near optimality, given by its relationship to the Bayes error. Indeed, let us denote by \alpha and \beta two sets of parameters for the distributions p and q, and by \pi = (\pi_p, \pi_q) a set of prior probabilities. If the value of (16) is smaller for the set \alpha than for the set \beta, it ...

... the sample mean shift vector

    M_{h,G}(x) \equiv \frac{\sum_{i=1}^{n} x_i\, g\big(\big\|\frac{x - x_i}{h}\big\|^2\big)}{\sum_{i=1}^{n} g\big(\big\|\frac{x - x_i}{h}\big\|^2\big)} - x    (10)

and the density estimate

    \hat{f}_G(x) \equiv \frac{C'}{n h^d} \sum_{i=1}^{n} g\big(\big\|\frac{x - x_i}{h}\big\|^2\big)    (11)

computed with kernel G. Using now (10) and (11), (9) becomes

    \hat{\nabla} f_K(x) = \hat{f}_G(x)\, \frac{2C}{h^2 C'}\, M_{h,G}(x) ,    (12)

from where it follows that

    M_{h,G}(x) = \frac{h^2 C'}{2C}\, \frac{\hat{\nabla} f_K(x)}{\hat{f}_G(x)} .    (13)

Expression (13) shows that the sample mean shift vector obtained with kernel G is an estimate of the normalized density gradient obtained with kernel K. This is a more general formulation of the property first remarked by Fukunaga [15, p. 535].

2.2 A Sufficient Convergence Condition

The mean shift procedure is defined recursively by computing the mean shift vector M_{h,G}(x) and translating the center of kernel G by M_{h,G}(x). Let us denote by \{y_j\}_{j=1,2,\dots} the sequence of successive locations of the kernel G, where

    y_{j+1} = \frac{\sum_{i=1}^{n} x_i\, g\big(\big\|\frac{y_j - x_i}{h}\big\|^2\big)}{\sum_{i=1}^{n} g\big(\big\|\frac{y_j - x_i}{h}\big\|^2\big)} ,  j = 1, 2, \dots    (14)

is the weighted mean at y_j computed with kernel G, and y_1 is the center of the initial kernel. The density ...
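The recursion (14) translates directly into code. Below is a minimal sketch in Python/NumPy, assuming the profile of the normal kernel, g(u) proportional to exp(-u/2); any valid profile g works, since the normalization constant cancels in the ratio of (14). The function name, tolerance, and iteration cap are illustrative choices, not part of the original text.

```python
import numpy as np

def mean_shift(y1, samples, h, tol=1e-6, max_iter=500):
    """Iterate equation (14): each step replaces y_j by the g-weighted
    mean of the samples, computed with kernel G.

    y1      -- initial kernel center, length-d array (the paper's y_1)
    samples -- (n, d) array of data points x_i
    h       -- kernel bandwidth
    """
    y = np.asarray(y1, dtype=float)
    X = np.asarray(samples, dtype=float)
    for _ in range(max_iter):
        u = np.sum(((y - X) / h) ** 2, axis=1)   # ||(y_j - x_i)/h||^2
        w = np.exp(-u / 2.0)                     # normal-kernel profile g
        y_next = w @ X / w.sum()                 # weighted mean of (14)
        if np.linalg.norm(y_next - y) < tol:     # mean shift vector ~ 0
            return y_next
        y = y_next
    return y

# Illustrative usage: samples drawn around a single mode at (3, 3).
rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, scale=0.5, size=(200, 2))
print(mean_shift([0.0, 0.0], X, h=1.0))  # converges near (3, 3)
```

Per (13), each iterate moves in the direction of the normalized density gradient estimate, so the sequence of kernel centers climbs toward a mode of the underlying density.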
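Returning to expression (16): when the densities are represented discretely (for example, as normalized histograms), the integral becomes a sum over bins. The following sketch assumes normalized histogram inputs; the discrete form and the function name are illustrative here, not taken from the excerpt above.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Discrete analogue of (16): rho = sum_z sqrt(p_z * q_z).

    p, q -- nonnegative arrays of the same shape, each summing to 1.
    Returns a value in [0, 1]; 1 means identical distributions.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(np.sqrt(p * q)))

# Illustrative usage: nearby histograms give a coefficient close to 1.
p = np.array([0.20, 0.50, 0.30])
q = np.array([0.25, 0.45, 0.30])
print(bhattacharyya_coefficient(p, q))  # ~0.998
```

Since a larger coefficient corresponds to more similar distributions, and hence a larger Bayes error, target localization amounts to maximizing this value over candidate locations y.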

