Can the mutual information between the model and the entire image be measured? This
would be a more direct parallel to the MDL framework. MDL directs the sender to transmit
the model and pose that will produce an encoding for the entire image that is shortest. As
the model explains more, less of the image needs to be encoded directly. We can derive an
approximation to mutual information between the model and the entire image. Let U and V
represent the whole model and whole image respectively. The mutual information equation
can now be rewritten,
    I(U; V) = h(U) + h(V) - h(U, V) .                                      (4.35)

The dependence of h(U, V) on T is implicit in the above equation.
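Equation 4.35 can be checked numerically on a small discrete joint distribution. This is only a sketch with invented probabilities; the chapter's U and V are continuous image signals, but the identity between mutual information and the entropy terms is the same:

```python
import numpy as np

# Toy joint distribution p(u, v) over binary U and V (invented numbers).
p_uv = np.array([[0.25, 0.05],
                 [0.05, 0.65]])

p_u = p_uv.sum(axis=1)  # marginal p(u)
p_v = p_uv.sum(axis=0)  # marginal p(v)

def entropy(p):
    """Shannon entropy in nats, skipping zero-probability cells."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

h_u, h_v, h_uv = entropy(p_u), entropy(p_v), entropy(p_uv.ravel())

# Direct definition: I(U;V) = sum_{u,v} p(u,v) log[ p(u,v) / (p(u) p(v)) ]
i_direct = sum(p_uv[i, j] * np.log(p_uv[i, j] / (p_u[i] * p_v[j]))
               for i in range(2) for j in range(2))

# Equation 4.35: I(U;V) = h(U) + h(V) - h(U,V)
print(h_u + h_v - h_uv, i_direct)
```

The two computations agree to machine precision, which is the content of Equation 4.35.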
Logically, the image can be split into two parts: V_m and V_{\bar{m}}. V_m is the part of the image
in which the model lies. V_{\bar{m}} is the unmodeled part of the image. We can define them as

    V_m \equiv \{ v(y) \;\text{such that}\; u(T^{-1} y) \;\text{is defined} \} ,   (4.36)

and

    V_{\bar{m}} \equiv V - V_m .                                           (4.37)

It is assumed that the two parts of the image are independent. This allows us to split the
entropies that involve V into two distinct parts:

    I(U; V) = h(U) + h(V) - h(U, V)                                        (4.38)
            = h(U) + h(V_m, V_{\bar{m}}) - h(U, V_m, V_{\bar{m}})          (4.39)
            = h(U) + h(V_m) + h(V_{\bar{m}}) - h(U, V_m) - h(V_{\bar{m}})  (4.40)
            = h(U) + h(V_m) - h(U, V_m) .                                  (4.41)

Step 4.40 relies on the assumption that V_{\bar{m}} is independent of both V_m and U, i.e. that the
background is independent both of the image of the object and of the object itself. Equation 4.41
directs us to maximize the entropy of the modeled part of the image, h(V_m), as well as
minimizing the joint entropy of the model and the image.

Paul A. Viola, AITR 1548                    CHAPTER 4. MATCHING AND ALIGNMENT

EMMA can estimate the entropy of a signal, but what is the entropy of an entire image?
This is a very difficult question to answer. The entropy of an image is a function of the
distribution of a vector random variable with between ten thousand and a million dimensions.
Though the entropy of an image can be modeled as the sum of the pixel entropies, this
is guaranteed to be an overestimate of the true image entropy.
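The overestimate is easy to see in a toy two-pixel image whose pixels are perfectly correlated (the numbers below are invented, purely for illustration):

```python
import numpy as np

# Two-pixel "image" where both pixels always agree: (0,0) or (1,1),
# each with probability 1/2.
joint = {(0, 0): 0.5, (1, 1): 0.5}

def entropy(probs):
    """Shannon entropy in bits of a list of probabilities."""
    p = np.array([q for q in probs if q > 0])
    return -np.sum(p * np.log2(p))

# True joint entropy of the image: one fair coin decides both pixels.
h_joint = entropy(joint.values())

# Pixelwise model: each pixel is, marginally, a fair coin.
p0 = sum(p for (a, _), p in joint.items() if a == 1)  # marginal of pixel 0
p1 = sum(p for (_, b), p in joint.items() if b == 1)  # marginal of pixel 1
h_sum = entropy([p0, 1 - p0]) + entropy([p1, 1 - p1])

print(h_joint, h_sum)  # sum of pixel entropies (2 bits) > joint entropy (1 bit)
```

The pixelwise model charges two bits for an image that costs one, and the gap grows with the amount of inter-pixel structure.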
One of the problems with mutual information alignment is that it does not tell us whether
the object is present in the image or not. The principle of minimum description length should
allow us to derive a decision threshold for the presence of an object. The object is present
if the image is easier to encode with it than without it. Unfortunately this decision is highly
dependent on the estimate of the entropy, or code length, of the explained part of the image.
The naive overestimate of image entropy, which simply sums the entropies of the pixels, is
not tight enough to determine the decision threshold correctly. An important area of open
research is deriving a more reasonable estimate of the code length of an image.
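The decision rule itself can be sketched with a toy one-dimensional "image" and an idealized Gaussian code length. Everything concrete here is an assumption for illustration: the object template, the noise model, and the 64-bit cost of transmitting the model and pose:

```python
import numpy as np

rng = np.random.default_rng(0)

def code_length_bits(residuals, sigma):
    # Idealized code length: -sum log2 p(r) under a Gaussian noise model.
    p = np.exp(-residuals**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    return -np.sum(np.log2(p))

# Synthetic 1-D "image": flat background plus a bright object in [40, 60).
image = rng.normal(0.0, 1.0, 100)
image[40:60] += 5.0

# Hypothesis A: background only (predict zero everywhere).
bits_without = code_length_bits(image, sigma=1.0)

# Hypothesis B: background plus object template at the correct pose.
prediction = np.zeros(100)
prediction[40:60] = 5.0
model_cost = 64.0  # hypothetical bits to transmit the model and pose
bits_with = model_cost + code_length_bits(image - prediction, sigma=1.0)

# MDL decision: the object is declared present if it shortens the total code.
print(bits_with < bits_without)
```

Here the object saves far more residual bits than the model costs, so the MDL criterion declares it present; with a weaker or absent object the comparison flips.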
In previous derivations points were sampled from the model and projected into the image.
Since we are now explicitly modeling the entropy of the image, we will sample from the image
and project back into the model. The joint entropy is then

    h(v(y), u(T^{-1} y)) = -E_Y[ \log p(v(y), u(T^{-1} y)) ]                          (4.42)
                         = -\int p(v(y), u(T^{-1} y)) \log p(v(y), u(T^{-1} y)) \, dy (4.43)
                         = -\int p(v(T x), u(x)) \log p(v(T x), u(x)) A(T, x) \, dx   (4.44)
                         = -E_X[ A(T, x) \log p(v(T x), u(x)) ] .                     (4.45)

Equation 4.44 involves a change of variables from y back into x. The correcting term A(T, x)
plays the role of the Jacobian of the transformation, measuring the ratio of the area in the
model as it projects into the image. For affine transformations the projected area is related
to the model area by the determinant of the transformation. For projective transformations
there will also be a term that arises from foreshortening. The mutual information is then

    I(U; V) \approx E_b\left[ A(T, x_b) \left( -\log P(u(x_b), a)
                                              - \log P(v(T x_b), a)
                                              + \log P(\{v(T x_b), u(x_b)\}, a) \right) \right] .   (4.46)
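A minimal numerical sketch of this kind of estimate follows. The one-dimensional signals, kernel width, and sample sizes are all invented, the Parzen estimator is a plain Gaussian kernel density, and the transformation is folded into the signal definitions so that the area weight A(T, x) reduces to a constant |det T|:

```python
import numpy as np

rng = np.random.default_rng(1)

def parzen_log_density(query, sample, sigma=0.25):
    """Log of a Gaussian Parzen density estimate of `query`, built from `sample`."""
    d = query[:, None, :] - sample[None, :, :]
    k = np.exp(-np.sum(d**2, axis=2) / (2 * sigma**2))
    dim = query.shape[1]
    return np.log(k.mean(axis=1) / (2 * np.pi * sigma**2) ** (dim / 2))

# Invented signals: u is the "model", v a noisy copy of it; the affine
# transformation is absorbed, so the area weight is just |det T|.
det_T = 0.5
u = lambda x: np.sin(2 * np.pi * x)
v = lambda x: np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.shape)

x_b = rng.uniform(0, 1, 50)   # sample b, drawn from the image
x_a = rng.uniform(0, 1, 50)   # sample a, used for the Parzen estimates
ub, vb = u(x_b)[:, None], v(x_b)[:, None]
ua, va = u(x_a)[:, None], v(x_a)[:, None]

# Area-weighted joint-minus-marginal log densities, as in Equation 4.46.
mi = np.mean(det_T * (parzen_log_density(np.hstack([vb, ub]), np.hstack([va, ua]))
                      - parzen_log_density(ub, ua)
                      - parzen_log_density(vb, va)))
print(mi)
```

For these well-aligned signals the estimate comes out positive; scaling it by the projected area is what couples the score to how much of the image the model covers.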
The mutual information between the whole model and image is the EMMA estimate of the
mutual information between the model and image signals, weighted by the projected area of
the model. This new formulation of mutual information includes a term which encourages
the model to explain as much of the image as possible. The derivative,

    \frac{d}{dT} I(U; V) \approx E_b\left[ A(T, x_b) \frac{d}{dT}\left( -\log P(u(x_b), a)
                                              - \log P(v(T x_b), a)
                                              + \log P(\{v(T x_b), u(x_b)\}, a) \right) \right.
                         \left. + \left( -\log P(u(x_b), a)
                                              - \log P(v(T x_b), a)
                                              + \log P(\{v(T x_b), u(x_b)\}, a) \right) \frac{d}{dT} A(T, x_b) \right] ,   (4.47)

encourages the model to grow as large as possible.

4.5 Summary
In this chapter several motivations for the selection of mutual information as a measure of
alignment have been presented. The primary existing measure of alignment, correlation, is
rederived as a maximum likelihood technique which has a number of important weaknesses.
The concept of maximum likelihood matching is then extended to define a more general
measure of alignment. This measure, called weighted neighbor likelihood, can be interpreted
a...