the points of the model down to a single point of the image. Though there are a number of problems with consistency that need to be addressed, it will serve as a source of intuition as we analyze different approaches.
We now have two alternatives for alignment when the imaging function is unknown: a theoretical technique that may be intractable (4.19), and an outwardly efficient technique that has a number of important difficulties (4.21). One would like to find a technique that combines the best features of each approach. Perhaps the complex search for the most likely imaging function, F_q, can be replaced with a simpler search for a consistent imaging function.
One type of function approximator that maximizes consistency is known as a nearest neighbor function approximator [Duda and Hart, 1973]. A nearest neighbor function approximator F_N(u, a) is constructed directly from a sample a. The approximator's value at a point u is the value of the sample point that is nearest:

    F_N(u, a) = v(T(\hat{x})) \quad \text{such that} \quad \hat{x} = \arg\min_{x_a \in a} |u - u(x_a)| \;.    (4.22)

F_N can be used to estimate the likelihood of a model as we did in 4.19. The nearest neighbor formulation can be much more efficient than a naive implementation of 4.19, since there is no need to search for F_q. The model, image, and transformation define F_N directly.
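The construction in Eq. 4.22 is straightforward to state in code. The sketch below is illustrative only (the function and variable names are ours, and the signals are assumed to be 1-D arrays of paired samples), not an implementation from this thesis.

```python
import numpy as np

def nearest_neighbor_approx(u_query, u_samples, v_samples):
    """Nearest neighbor function approximator F_N(u, a): return the v value
    whose paired u sample is closest to the query point (Eq. 4.22)."""
    idx = np.argmin(np.abs(u_samples - u_query))
    return v_samples[idx]

# The sample a: paired observations (u(x_a), v(T(x_a))) from the two signals.
u_samples = np.array([0.0, 1.0, 2.0, 3.0])
v_samples = np.array([0.1, 0.9, 2.1, 2.9])

print(nearest_neighbor_approx(1.2, u_samples, v_samples))  # nearest u is 1.0, so 0.9
```

Because the approximator simply selects one sample value, it is piecewise constant in u, which is the source of the non-differentiability discussed next.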
The nearest neighbor function approximator plays a role in the likelihood computation that is very similar to the role that Parzen density estimation plays in entropy estimation (see Sections 2.4.3 and 3.2). Unlike the Parzen estimate, the nearest neighbor approximation is not continuously differentiable. A similar though differentiable version is called a weighted neighbor approximator²:
    F(u, a) = \frac{\sum_a R(u - u(x_a)) \, v(T(x_a))}{\sum_a R(u - u(x_a))} \;.    (4.23)

The weighting function R usually has a maximum at zero, and falls off asymptotically away from zero. A common choice for R is the Gaussian density function g_\psi with standard deviation \psi. We can rewrite F as

    F(u, a) = \sum_a W_a(u, u(x_a)) \, v(T(x_a)) \;,    (4.24)

where W is the soft nearest neighbor function first defined in Section 3.2,

    W_a(u_1, u_2) \equiv \frac{g_\psi(u_1 - u_2)}{\sum_a g_\psi(u_1 - u(x_a))} \;.

²This technique is also known as kernel regression.

86    4.1. ALIGNMENT    AITR 1548
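Equations 4.23 and 4.24 amount to a small kernel regression routine. The following sketch assumes 1-D signals and a Gaussian weighting function; the names are ours, chosen for illustration.

```python
import numpy as np

def gaussian(d, psi):
    # Gaussian kernel g_psi; its normalizing constant cancels in the ratio of Eq. 4.23
    return np.exp(-0.5 * (d / psi) ** 2)

def weighted_neighbor_approx(u_query, u_samples, v_samples, psi=0.5):
    """Weighted neighbor approximator F(u, a): a convex combination of the
    sample values v(T(x_a)) with soft nearest neighbor weights W_a (Eq. 4.24)."""
    w = gaussian(u_query - u_samples, psi)
    w = w / w.sum()  # the weights sum to one, the fact later used in step 4.27
    return np.dot(w, v_samples)
```

As \psi → 0 the weights concentrate on the single closest sample and F collapses to the nearest neighbor approximator F_N; a larger \psi trades that fidelity for the smoothness and differentiability needed for gradient-based alignment.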
An estimate for the log likelihood is then

    \log \ell(T) = -k_2 \sum_a \Big[ v(T(x_a)) - F(u(x_a), a) \Big]^2    (4.25)

    = -k_2 \sum_a \Big[ v(T(x_a)) - \sum_b W_a(u(x_a), u(x_b)) \, v(T(x_b)) \Big]^2    (4.26)

    = -k_2 \sum_a \Big[ \sum_b W_a(u(x_a), u(x_b)) \, v(T(x_a)) - \sum_b W_a(u(x_a), u(x_b)) \, v(T(x_b)) \Big]^2    (4.27)

    = -k_2 \sum_a \Big[ \sum_b W_a(u(x_a), u(x_b)) \big( v(T(x_a)) - v(T(x_b)) \big) \Big]^2 \;.    (4.28)

Step 4.27 relies on the fact that

    \sum_b W_a(u(x_a), u(x_b)) = 1 \;,

where x_a \in a. The log likelihood of a transformation using a weighted neighbor function approximation is very similar to the intuitive consistency measure 4.21. In addition, its derivative bears a striking resemblance to the derivative of the EMMA estimate (see 3.26):
    \frac{d}{dT} \log \ell(T)    (4.29)

    = -k_2 \sum_a \frac{\sum_b g_\psi(u(x_a) - u(x_b)) \, 2 \big( v(T(x_a)) - v(T(x_b)) \big) \Big( \frac{d}{dT} v(T(x_a)) - \frac{d}{dT} v(T(x_b)) \Big)}{\sum_b g_\psi(u(x_a) - u(x_b))}    (4.30)

    = -k_2 \sum_a \sum_b W_a(u(x_a), u(x_b)) \, 2 \big( v(T(x_a)) - v(T(x_b)) \big) \frac{d}{dT} \big( v(T(x_a)) - v(T(x_b)) \big) \;.    (4.31)

87    Paul A. Viola    CHAPTER 4. MATCHING AND ALIGNMENT
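The algebra from Eq. 4.25 to Eq. 4.28 can be checked numerically: both forms must agree because each row of weights sums to one. The sketch below is ours, assuming 1-D signals and Gaussian soft nearest neighbor weights; it is a sanity check, not code from the thesis.

```python
import numpy as np

def soft_weights(u_vals, psi=0.5):
    # W_a(u(x_a), u(x_b)) for every pair: row a, column b (Eq. 4.24).
    d = u_vals[:, None] - u_vals[None, :]
    g = np.exp(-0.5 * (d / psi) ** 2)
    return g / g.sum(axis=1, keepdims=True)  # each row sums to one (step 4.27)

def log_likelihood_residual_form(u_vals, v_vals, k2=1.0, psi=0.5):
    # Eq. 4.25: squared residual against the weighted neighbor estimate F(u(x_a), a).
    W = soft_weights(u_vals, psi)
    return -k2 * np.sum((v_vals - W @ v_vals) ** 2)

def log_likelihood_pairwise_form(u_vals, v_vals, k2=1.0, psi=0.5):
    # Eq. 4.28: pairwise differences v(T(x_a)) - v(T(x_b)) weighted by W_a.
    W = soft_weights(u_vals, psi)
    diff = v_vals[:, None] - v_vals[None, :]
    return -k2 * np.sum(np.sum(W * diff, axis=1) ** 2)

rng = np.random.default_rng(0)
u = rng.normal(size=50)
v = u + 0.1 * rng.normal(size=50)
print(np.isclose(log_likelihood_residual_form(u, v),
                 log_likelihood_pairwise_form(u, v)))  # True
```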
Figure 4.6: On the left is a plot of image and model that are identical except for noise. On the right is a plot of the logarithm of weighted neighbor likelihood versus translation.

Figure 4.7: On the left is a plot of image and model that are related nonlinearly. On the right is a plot of the logarithm of weighted neighbor likelihood versus translation.
Weighted neighbor likelihood can be used to evaluate the cost of different translations. Figure 4.6 shows a graph of weighted neighbor likelihood versus translation for the initial pair of signals, u(x) and v(x) = u(x) + \eta. Figure 4.7 contains a similar graph for the second, nonlinear experiment, u(x) and v(x) = (u(x) - 2)^2 + \eta. Both graphs show a strong maximum at the correct alignment of the signals. We can conclude that weighted neighbor likelihood can be used in situations where neither cost nor normalized cost would work.
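An experiment of this flavor can be reproduced in a few lines. The signal shapes, noise level, and kernel width below are illustrative assumptions, not the settings used in the thesis; the sketch only shows that the weighted neighbor log likelihood peaks at the correct shift even when the intensity relation is nonlinear.

```python
import numpy as np

def weighted_neighbor_log_likelihood(u_vals, v_vals, k2=1.0, psi=0.2):
    # Eq. 4.25 with Gaussian soft nearest neighbor weights (Eq. 4.24).
    d = u_vals[:, None] - u_vals[None, :]
    g = np.exp(-0.5 * (d / psi) ** 2)
    W = g / g.sum(axis=1, keepdims=True)
    return -k2 * np.sum((v_vals - W @ v_vals) ** 2)

# Model u(x) and an image related to it nonlinearly, shifted by 5 samples.
rng = np.random.default_rng(1)
x = np.arange(200)
u = np.sin(2 * np.pi * x / 50.0)  # periodic, so np.roll wraps cleanly
true_shift = 5
v = (np.roll(u, -true_shift) - 2.0) ** 2 + 0.01 * rng.normal(size=x.size)

# Scan candidate translations and keep the one with the highest log likelihood.
shifts = list(range(-10, 11))
scores = [weighted_neighbor_log_likelihood(u, np.roll(v, t)) for t in shifts]
print(shifts[int(np.argmax(scores))])  # recovers true_shift = 5
```

At the correct shift, v is (up to noise) a deterministic function of u, so the weighted neighbor estimate fits it tightly; at wrong shifts the relation becomes multi-valued and the residual grows, exactly the consistency intuition above.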
The parallel between EMMA and weighted neighbor likelihood is more than structural. EMMA estimates the density of the sample directly and uses it to compute the derivative of entropy with respect to the parameter vector; weighted neighbor likelihood estimates the imaging function directly and uses it to compute the derivative of log likelihood with respect to the transformation. More importantly, both techniques manipulate the entropy of the joint distribution of u and v. EMMA can be used to evaluate the joint entropy h(v(T(x)), u(X)); weighted neighbor likelihood evaluates the conditional entropy h(v(T(x)) | u(X)) under the assumption that p(v(T...