p(v(T(x)) | u(X); a) = g_ψ(v(T(x)) − F(u(x); a)); see Sections 2.3.1 and 4.1.2
for commentary on the equivalence of log likelihood and sample entropy.
We can relax the constraint that v be conditionally Gaussian by using EMMA to estimate
the conditional entropy:

    h(v(T(x)) | u(X)) ≈ −h(u(x)) + h(v(T(x)), u(X)) .    (4.32)

The first term is the entropy of the model. It is not a function of the transformation.
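Equation 4.32 can be exercised numerically. Below is a minimal sketch, assuming a leave-one-out Parzen-window entropy estimate with a Gaussian kernel (a simplification of EMMA, which draws two separate samples); the test signals, kernel width, and sample size are illustrative choices of mine, not values from the text.

```python
import numpy as np

def parzen_entropy(samples, sigma=0.25):
    """Leave-one-out Parzen-window estimate of differential entropy.

    samples: (N, d) array. The entropy is the negative mean log of the
    leave-one-out Parzen density with isotropic Gaussian kernels.
    (Illustrative simplification of EMMA, which uses two samples.)
    """
    x = np.asarray(samples, dtype=float)
    n, d = x.shape
    # Pairwise squared distances between all samples.
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma**2)) / ((2 * np.pi * sigma**2) ** (d / 2))
    np.fill_diagonal(k, 0.0)            # leave-one-out
    p = k.sum(axis=1) / (n - 1)         # density estimate at each sample
    return -np.mean(np.log(p))

rng = np.random.default_rng(0)
u = rng.normal(size=400)
v = u + 0.1 * rng.normal(size=400)      # v strongly dependent on u

h_u = parzen_entropy(u[:, None])
h_v = parzen_entropy(v[:, None])
h_uv = parzen_entropy(np.stack([u, v], axis=1))
h_v_given_u = h_uv - h_u                # h(v|u) = h(u,v) - h(u)

# Conditioning on u should explain away most of v's variability.
print(h_v_given_u, h_v)
```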
Why are these two seemingly unrelated concepts, weighted neighbor likelihood and conditional entropy, so closely related? We can gain some intuition about this equivalence by
looking at the joint distribution of u and v. For any particular transformation we can sample
points (u(x_a), v(T(x_a)))^T and plot them. Figure 4.8 shows the joint samples of the signals
from Figure 4.1 when aligned. The thin line in the plot is the weighted neighbor function
approximation of this data; it is a good fit to the data. There is a noticeable clumping, or
clustering, in the data. These clumps arise from the regions of almost constant intensity in
the signals. There are four large regions of constant intensity and four clusters.
When these almost identical signals are aligned they are strongly correlated. Large values
in one signal correspond to large values of the other. Conversely, small values in one
correspond to small values in the other. Correlation measures the tendency of the data to lie
along the line x = y; normalized correlation measures the tendency of the data to lie along
some line of positive slope. Figure 4.9 shows the joint samples of the two signals shown in
Figure 4.2. These signals are not linearly related or even correlated, but they are functionally
related.
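This distinction is easy to demonstrate. In the minimal sketch below (synthetic data of my own choosing, not the signals of Figure 4.2), v is a deterministic function of u of the same kind plotted in Figure 4.9, yet because u is symmetric about zero the linear correlation is essentially zero:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(-1.0, 1.0, size=10_000)
v = -u**2                          # functionally related to u, not linearly

r = np.corrcoef(u, v)[0, 1]        # sample correlation coefficient
print(round(r, 3))                 # approximately 0
```

Correlation misses the relationship entirely, while the joint distribution of (u, v) is still highly structured.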
Weighted neighbor likelihood measures the quality of the weighted neighbor function
approximation. In both of these graphs the points of the sample lie near the weighted neighbor
function approximation. Moreover, in both of these graphs the joint distribution of samples
is tightly packed together. Points are not distributed throughout the space, but lie instead
in a small part of the joint space. This is the hallmark of a low entropy distribution.
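The weighted neighbor function can be sketched as a kernel-weighted average of nearby v values, in the style of Nadaraya-Watson regression; the Gaussian kernel, bandwidth, and test signal below are my own illustrative assumptions and may differ in detail from the estimator used in the thesis.

```python
import numpy as np

def weighted_neighbor(u_query, u_train, v_train, sigma=0.08):
    """Estimate v at each query point as a kernel-weighted average of the
    training v values, weighted by closeness in u (a Nadaraya-Watson-style
    sketch of the weighted neighbor function)."""
    w = np.exp(-(u_query[:, None] - u_train[None, :]) ** 2 / (2 * sigma**2))
    return (w * v_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 500)
v = -u**2 + 0.02 * rng.normal(size=500)    # noisy functional relation

q = np.linspace(-0.9, 0.9, 50)
f = weighted_neighbor(q, u, v)
err = np.max(np.abs(f - (-q**2)))          # the fit tracks the true curve
print(err)
```

When the data lie near a curve, as in Figures 4.8 and 4.9, this kind of estimate fits well; when they are scattered, it cannot.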
We can generate similar graphs for signals that are not aligned. Figures 4.10 and 4.11 show
the same signals, except that the image has been shifted 30 units. For these shifted
signals the structure of the joint distribution is destroyed. The weighted neighbor function
Paul A. Viola, CHAPTER 4. MATCHING AND ALIGNMENT

Figure 4.8: Samples from the joint space of u(x) and v(x) = u(x) + η. A small black square
is plotted for every pixel in the signals. The X axis is the value of u(x). The Y axis is the
value of v(x). The clumping of points into clusters is caused by the regions of almost constant
intensity in the images. The thin line plotted through the data is the weighted neighbor
function estimate.
approximation is a terrible fit to the data. As a result the weighted neighbor likelihood of
these signals is low.
Alternatively we could look directly at the distributions. When the signals are aligned the
distributions are compact. When they are misaligned the distributions are spread out and
haphazard. In other words, aligned signals have low joint entropy and misaligned signals
have high joint entropy.
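That observation can be checked directly. The sketch below uses a coarse 2-D histogram plug-in estimate of joint entropy (an illustrative stand-in for EMMA, not the thesis's estimator) on a noisy sinusoid and a 30-sample shift of it; the signal, noise level, and bin count are assumptions for demonstration only.

```python
import numpy as np

def joint_entropy(a, b, bins=16):
    """Plug-in joint entropy (in nats) from a 2-D histogram of (a, b)."""
    h, _, _ = np.histogram2d(a, b, bins=bins)
    p = h.ravel() / h.sum()
    p = p[p > 0]                  # drop empty bins before taking logs
    return -(p * np.log(p)).sum()

x = np.linspace(0, 4 * np.pi, 500)
rng = np.random.default_rng(3)
u = np.sin(x)
v = np.sin(x) + 0.05 * rng.normal(size=x.size)   # noisy copy of u

h_aligned = joint_entropy(u, v)                  # mass hugs the diagonal
h_misaligned = joint_entropy(u, np.roll(v, 30))  # shift v by 30 samples
print(h_aligned, h_misaligned)
```

Aligned samples concentrate their mass in a few bins along a curve; shifting spreads the mass out and the estimated joint entropy rises.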
This suggests an alternative to weighted neighbor likelihood: the EMMA approximation
of joint entropy. Graphed below are the EMMA estimates of joint entropy, h(w), versus
translation for each of the signal alignment problems discussed. Figure 4.12 shows a graph
of joint entropy for the two signals that are nearly identical. Figure 4.13 shows a graph of
joint entropy for the model and the nonlinearly transformed image. In both cases the graphs
show strong minima at the correct aligning translation.

4.1. ALIGNMENT, AITR 1548

Figure 4.9: Samples from the joint space of u(x) and v(x) = −u(x)² + η.

Figure 4.10: Samples from the joint space of u(x) and v(x) = u(x + 30) + η. Unlike the
previous graph, these two signals are no longer aligned.

Figure 4.11: Samples from the joint space of u(x) and v(x) = −u(x + 30)² + η. The two
signals are not aligned.
Figure 4.12: On the left is a plot of the image and model that are identical except for noise
(intensity versus position). On the right is a plot of the estimated joint entropy versus
translation.

Figure 4.13: On the left is a plot of the image and model that are related nonlinearly
(intensity versus position). On the right is a plot of the estimated joint entropy versus
translation.
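A curve like those in Figures 4.12 and 4.13 can be mimicked with a small sweep: estimate the joint entropy at each candidate translation and look for the minimum. The piecewise-constant test signal, noise level, and histogram plug-in estimator below are illustrative assumptions of mine, not the thesis's data or its EMMA estimator.

```python
import numpy as np

def joint_entropy(a, b, bins=16):
    """Plug-in joint entropy (in nats) from a 2-D histogram of (a, b)."""
    h, _, _ = np.histogram2d(a, b, bins=bins)
    p = h.ravel() / h.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(4)
# Piecewise-constant "model" signal: regions of constant intensity,
# like the signals discussed in the text (600 samples, blocks of 10).
u = np.repeat(rng.normal(size=60), 10)
true_shift = 30
v = np.roll(u, true_shift) + 0.05 * rng.normal(size=u.size)

# Sweep candidate translations; joint entropy dips at the correct one.
shifts = list(range(-60, 61))
curve = [joint_entropy(np.roll(v, -s), u) for s in shifts]
best = shifts[int(np.argmin(curve))]
print(best)   # -> 30
```

At the correct shift the joint samples collapse onto the diagonal and the entropy estimate reaches its minimum, reproducing the strong minima seen in the figures.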