Fundamentals of Image Registration and Mosaicking
M. Zuliani ([email protected]), Vision Research Lab, Department of Electrical and Computer Engineering, University of California, Santa Barbara
October 30, 2007

Outline
1. Fundamentals for Image Registration: A Qualitative Definition; Conventions; Image Derivatives; Image Interpolation; Formal Definition
2. Image Registration Systems: Building Blocks; Global Mappings; Digression: Mutual Information Registration
3. Point Feature Detection: Introduction; The Gradient Normal Matrix; Condition Theory Primer; Two Ways to Look at the Problem; Corner Detectors

Fundamentals for Image Registration

A Qualitative Definition
Image registration: establish a mapping between two or more images possibly taken
- at different times,
- from different viewpoints,
- under different lighting conditions, and/or
- by different sensors,
and align the images with respect to a common coordinate system, coherently with the three dimensional structure of the scene.
Image mosaicking: images are combined to provide a representation of the scene that is both geometrically and photometrically consistent.

Conventions: The Image Lattice
(figure: a 16 x 16 image lattice with rows and columns indexed from 1 to 16)

Image Derivatives: Finite Differences
On a continuous domain: $\frac{df}{dx}(x) \doteq \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$
On a discrete lattice: $I_{x_1}(\mathbf{x}_{i,j}) \doteq \frac{I(\mathbf{x}_{i+1,j})-I(\mathbf{x}_{i-1,j})}{2h}$

Image Derivatives: Smoothing Before Deriving
Prewitt operator (average smoothing composed with the first central difference):
$P_{x_1} = \frac{1}{2}\begin{bmatrix}-1\\0\\1\end{bmatrix}\frac{1}{3}\begin{bmatrix}1&1&1\end{bmatrix} = \frac{1}{6}\begin{bmatrix}-1&-1&-1\\0&0&0\\1&1&1\end{bmatrix}$
Sobel operator (change the smoothing kernel to $\frac{1}{4}\begin{bmatrix}1&2&1\end{bmatrix}$):
$S_{x_1} = \frac{1}{2}\begin{bmatrix}-1\\0\\1\end{bmatrix}\frac{1}{4}\begin{bmatrix}1&2&1\end{bmatrix} = \frac{1}{8}\begin{bmatrix}-1&-2&-1\\0&0&0\\1&2&1\end{bmatrix}$
Transpose the kernels to differentiate along $x_2$.
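A minimal Matlab sketch (not part of the original slides) of the separable Prewitt and Sobel filters above; the row-as-$x_1$ orientation and the variable names are assumptions, and I is assumed to be a double-valued single channel image.

    % Derivative along x1 (rows) with Prewitt/Sobel kernels built from their separable factors.
    d   = [-1; 0; 1] / 2;            % first central difference (along x1, i.e. rows)
    pS  = [1 1 1] / 3;               % Prewitt average smoothing (along x2, i.e. columns)
    sS  = [1 2 1] / 4;               % Sobel smoothing (along x2)
    P1  = d * pS;                    % 3x3 Prewitt kernel for the derivative along x1
    S1  = d * sS;                    % 3x3 Sobel kernel for the derivative along x1
    Ix1 = conv2(I, S1, 'same');      % note: conv2 flips the kernel, so the sign convention flips too
    Ix2 = conv2(I, S1', 'same');     % transposed kernel differentiates along x2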
Image Derivatives: How Much Smoothing? The Issue of Scale
As noted by Lindeberg: "[...] objects in the world may appear in different ways depending upon the scale of observation." Thus we need different tools to describe them: quantum mechanics, particle physics, thermodynamics, classical mechanics, general relativity. Similarly with images: we construct derivative operators that depend continuously on a smoothing parameter and that are capable of capturing signal variations at different scales.

Image Derivatives: Scale Space Signal Representation
From the image $I$ to its scale space representation $L = \{L_\sigma(\mathbf{x})\}$.
Recipe: convolve the original image with a Gaussian kernel:
$L_\sigma(\mathbf{x}) = (I * G_\sigma)(\mathbf{x})$, with $G_\sigma(\mathbf{x}) \doteq \frac{1}{2\pi\sigma^2}\, e^{-\frac{\|\mathbf{x}\|^2}{2\sigma^2}}$
Physical intuition: $L$ is the solution to the heat diffusion equation
$\frac{\partial L_t}{\partial t}(\mathbf{x}) = \frac{1}{2}\nabla^2_{\mathbf{x}} L_t(\mathbf{x})$, with $L_0(\mathbf{x}) = I(\mathbf{x})$, for $t = \sigma^2$.

Image Derivatives: The Scale Space Representation of a Cameraman
(figure: scale space of the cameraman image for $\sigma$ = 0, 1, 1.9, 2.8, 3.7, 4.6, 5.5, 6.4, 7.3, 8.2, 9.1, 10)

Image Derivatives: Why Gaussian Kernels?
Let's define:
- Shift: $(T_\Delta I)(\mathbf{x}) \doteq I(\mathbf{x} - \Delta)$
- Rotation: $(R_\theta I)(\mathbf{x}) \doteq I(R(\theta)\mathbf{x})$, where $R(\theta)$ is the $2\times 2$ matrix that rotates a vector by an angle $\theta$
- Scaling: $(S_\lambda I)(\mathbf{x}) \doteq I(\lambda\mathbf{x})$
and the functional $T_{\sigma^2}: \mathcal{I}\times\mathbb{R}^2 \to \mathbb{R}$, $(I, \mathbf{x}) \mapsto T_{\sigma^2}[I](\mathbf{x})$.

Image Derivatives: Because They Satisfy the Scale Space Axioms!
- Linearity: $T_{\sigma^2}[\alpha_1 I_1 + \alpha_2 I_2](\mathbf{x}) = \alpha_1 T_{\sigma^2}[I_1](\mathbf{x}) + \alpha_2 T_{\sigma^2}[I_2](\mathbf{x})$
- Scale invariance: there must exist a strictly increasing continuous function $\varphi$ such that $\varphi(0) = 0$ and $\lim_{s\to\infty}\varphi(s) = \infty$ so that $S_\lambda T_{\sigma^2}[I] = T_{\varphi(\sigma^2)}[S_\lambda I]$
- Rotation invariance: $R_\theta T_{\sigma^2}[I] = T_{\sigma^2}[R_\theta I]$
- Shift invariance: $T_\Delta T_{\sigma^2}[I] = T_{\sigma^2}[T_\Delta I]$
- Semi-group structure: $T_{\sigma_1^2}[T_{\sigma_2^2}[I]] = T_{\sigma_1^2 + \sigma_2^2}[I]$
- Causality constraints, divided into:
  Weak causality constraint: any scale space isophote $L_\sigma(\mathbf{x}) = \lambda$ is connected to a point $L_0(\mathbf{x}) = I(\mathbf{x}) = \lambda$.
  Strong causality constraint: for every choice of $\sigma_2 > \sigma_1 \geq 0$ the intersection of an isophote within the domain $\{(\mathbf{x},\sigma)\in\mathbb{R}^2\times\mathbb{R}^+ : \mathbf{x}\in\mathbb{R}^2,\ \sigma\in(\sigma_1,\sigma_2]\}$ with the plane $\sigma = \sigma_1$ should not be empty.

Image Derivatives: The Scale Space Representation of a 1D Signal
(figure: a 1D signal $f(x)$ and its scale space representation for $\sigma$ between 0.25 and 9.25, with the isophote at $\lambda = 60$ highlighted)

Image Derivatives: Scale Space Differentiation Filters
Fact: $\frac{\partial L_\sigma}{\partial x_i}(\mathbf{x}) = \frac{\partial}{\partial x_i}(I * G_\sigma)(\mathbf{x}) = \left(I * \frac{\partial G_\sigma}{\partial x_i}\right)(\mathbf{x})$
Image gradient: $\nabla L_\sigma = \begin{bmatrix}\frac{\partial L_\sigma}{\partial x_1} & \frac{\partial L_\sigma}{\partial x_2}\end{bmatrix}^T$
Magnitude of the gradient: $\|\nabla L_\sigma\| = \sqrt{\left(\frac{\partial L_\sigma}{\partial x_1}\right)^2 + \left(\frac{\partial L_\sigma}{\partial x_2}\right)^2}$

Image Derivatives: Scale Space Gradient Magnitude of a Cameraman
(figure: gradient magnitude of the cameraman image for $\sigma$ = 0, 1, 1.9, ..., 10)
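A minimal Matlab sketch (not from the original slides) of the scale space differentiation filters just described; the image I and the scale sigma (assumed > 0) are taken as given, and the kernel truncation radius is an assumption.

    % Scale space gradient: convolve I with the derivatives of a Gaussian G_sigma.
    r = ceil(3 * sigma);                  % truncate the kernel at about 3 sigma
    [u, v] = meshgrid(-r:r, -r:r);        % u varies along columns (x2), v along rows (x1)
    G   = exp(-(u.^2 + v.^2) / (2 * sigma^2)) / (2 * pi * sigma^2);
    Gx1 = -v / sigma^2 .* G;              % dG/dx1 (rows)
    Gx2 = -u / sigma^2 .* G;              % dG/dx2 (columns)
    Lx1 = conv2(I, Gx1, 'same');          % I * dG/dx1
    Lx2 = conv2(I, Gx2, 'same');          % I * dG/dx2
    gradMag = sqrt(Lx1.^2 + Lx2.^2);      % magnitude of the scale space gradient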
Image Interpolation: Why Do We Need Interpolation?
Because we may want to recover the intensity value at non-integer pixel locations.
Interpolation methods: nearest neighbor, bilinear, cubic, Lanczos, ...

Image Interpolation: Nearest Neighbor Interpolation
What is the value of $f$ at $[p\ q]^T$?
$\hat{f}(p, q) = f(\mathrm{round}(p), \mathrm{round}(q))$

Image Interpolation: Bilinear Interpolation, Notation
What is the value of $f$ at $[p\ q]^T$? Let $x = \lfloor p\rfloor$ and $y = \lfloor q\rfloor$, and define
$F_{0,0} \doteq f(x, y)$, $F_{1,0} \doteq f(x+1, y)$, $F_{0,1} \doteq f(x, y+1)$, $F_{1,1} \doteq f(x+1, y+1)$,
$\Delta x \doteq p - x$ and $\Delta y \doteq q - y$.

Image Interpolation: Bilinear Interpolation
Linear interpolation in the $x$ direction:
$f_y(x) = (1-\Delta x)F_{0,0} + \Delta x\,F_{1,0}$
$f_{y+1}(x) = (1-\Delta x)F_{0,1} + \Delta x\,F_{1,1}$
Linear interpolation in the $y$ direction:
$\hat{f}(p, q) = (1-\Delta y)f_y + \Delta y\,f_{y+1} = (1-\Delta y)(1-\Delta x)F_{0,0} + (1-\Delta y)\Delta x\,F_{1,0} + \Delta y(1-\Delta x)F_{0,1} + \Delta y\,\Delta x\,F_{1,1}$
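A minimal Matlab sketch of the bilinear formula above (not part of the slides); f is assumed to be a matrix indexed as f(x, y) with (p, q) strictly inside the image, so that x+1 and y+1 are valid indices.

    % Bilinear interpolation of f at the non-integer location (p, q).
    x  = floor(p);  y  = floor(q);
    dx = p - x;     dy = q - y;
    F00 = f(x,   y);    F10 = f(x+1, y);
    F01 = f(x,   y+1);  F11 = f(x+1, y+1);
    fhat = (1-dy)*((1-dx)*F00 + dx*F10) + dy*((1-dx)*F01 + dx*F11);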
Formal Definition: Some Definitions
Definition. Two points $\mathbf{x}$ and $\mathbf{x}'$ correspond between the reference and the sensed image, written $\mathbf{x} \leftrightarrow \mathbf{x}'$, if they are the projections of the same point in the scene onto the camera image planes.
Definition. A mapping $T_\theta$ is a function $T: \mathbb{R}^2 \to \mathbb{R}^2$, $\mathbf{x} \mapsto T_\theta(\mathbf{x})$, where $\theta$ is the vector of parameters of the transformation and $\mathbf{x}$ is the point to be mapped.

Formal Definition: More Definitions
Definition. The overlapping area $O$ in the reference image, according to the transformation $T_\theta$, is the set of points $O \doteq \{\mathbf{x} \in D : T_\theta(\mathbf{x}) \in D'\}$.
Definition. The overlapping area $O'$ in the sensed image, according to the transformation $T_\theta$, is the set of points $O' \doteq \{\mathbf{x}' \in D' : \exists\,\mathbf{x} \in D \text{ such that } \mathbf{x}' = T_\theta(\mathbf{x})\}$.

Formal Definition: Image Registration, a Formal Definition
Definition (Registered Image Pair). An image pair $(I, I')$ is registered if there exists a parameter vector $\hat\theta$ such that for every $\mathbf{x} \in O$ the points $\mathbf{x}$ and $\mathbf{x}' = T_{\hat\theta}(\mathbf{x})$ correspond, i.e. $\mathbf{x} \leftrightarrow T_{\hat\theta}(\mathbf{x})$.

Formal Definition: Image Registration, Overlapping
(figure: the overlapping area is displayed in green; image courtesy of Prof. Chuck Stewart, RPI registration dataset)

Formal Definition: Image Registration, Alignment
(figure: the two images aligned in a common coordinate system)

Image Registration Systems

Building Blocks: A Feature Based Registration System
(block diagram of the registration pipeline: input and output images, feature extraction, feature description, feature matching, model estimation, and image fusion modules; image courtesy of J. Nieuwenhuijse, copyright by New House Internet Services BV)

Global Mappings: Translation
Every point in the image is translated by the same amount:
$T_\theta(\mathbf{y}) = \mathbf{y} + \theta$, with $\theta = [\theta_1\ \theta_2]^T \in \mathbb{R}^2$
The parameter vector contains the displacements in the $y_1$ and $y_2$ directions.

Global Mappings: Rotation, Scale and Translation (RST)
Every point in the image is subject to a rotation, a scaling and a translation. The anchor point $\mathbf{x}$ specifies the point about which the coordinate system rotates and translates:
$T_{\theta,\mathbf{x}}(\mathbf{y}) = \mathbf{x} + \underbrace{\begin{bmatrix}\theta_3 & -\theta_4\\ \theta_4 & \theta_3\end{bmatrix}}_{sR}(\mathbf{y}-\mathbf{x}) + \underbrace{\begin{bmatrix}\theta_1\\ \theta_2\end{bmatrix}}_{\mathbf{t}}$, with $\theta = [\theta_1\ \theta_2\ \theta_3\ \theta_4]^T \in \mathbb{R}^4$
The components $\theta_3, \theta_4$ describe the rotation and the scaling, while $\theta_1$ and $\theta_2$ encode the translation.

Global Mappings: Affine
Every point in the image undergoes an affine transformation; $\mathbf{x}$ is the anchor point:
$T_{\theta,\mathbf{x}}(\mathbf{y}) = \mathbf{x} + \underbrace{\begin{bmatrix}\theta_3 & \theta_5\\ \theta_4 & \theta_6\end{bmatrix}}_{A}(\mathbf{y}-\mathbf{x}) + \underbrace{\begin{bmatrix}\theta_1\\ \theta_2\end{bmatrix}}_{\mathbf{t}}$, with $\theta = [\theta_1\ \cdots\ \theta_6]^T \in \mathbb{R}^6$

Global Mappings: Homography - I
Describes how a planar surface transforms when imaged through pin-hole cameras that have different positions and orientations in space. A homography is a linear transformation in the projective space $\mathbb{P}^2$.
From Euclidean space to projective space: $\mathbb{R}^2 \ni \begin{bmatrix}x_1\\ x_2\end{bmatrix} \mapsto \begin{bmatrix}x_1\\ x_2\\ 1\end{bmatrix} \in \mathbb{P}^2$
From projective space to Euclidean space: $\mathbb{P}^2 \ni \begin{bmatrix}p_1\\ p_2\\ p_3\end{bmatrix} \mapsto \begin{bmatrix}p_1/p_3\\ p_2/p_3\end{bmatrix} \in \mathbb{R}^2$

Global Mappings: Homography - II
Two points $\mathbf{p}$ and $\mathbf{p}'$ in the projective space are related according to a (planar) homography if:
$\mathbf{p}' \cong \underbrace{\begin{bmatrix}\theta_1 & \theta_4 & \theta_7\\ \theta_2 & \theta_5 & \theta_8\\ \theta_3 & \theta_6 & \theta_9\end{bmatrix}}_{H}\mathbf{p}$
In Euclidean space a homography is represented via the nonlinear relation:
$T_\theta(\mathbf{y}) = \begin{bmatrix}\frac{\theta_1 y_1 + \theta_4 y_2 + \theta_7}{\theta_3 y_1 + \theta_6 y_2 + \theta_9}\\ \frac{\theta_2 y_1 + \theta_5 y_2 + \theta_8}{\theta_3 y_1 + \theta_6 y_2 + \theta_9}\end{bmatrix}$
To fix the 9th degree of freedom of the parameter vector $\theta \in \mathbb{R}^9$, set its norm to 1: $\|\theta\| = 1$.
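A minimal Matlab sketch (not from the slides) of applying the homography $T_\theta$ to a Euclidean point; theta is assumed to stack $H$ column by column as in the slide above, and the point is assumed not to map to the line at infinity.

    % Map a Euclidean point y = [y1; y2] through the homography parameterized by theta.
    H = reshape(theta, 3, 3);          % column-wise fill reproduces the H of the slide
    p = H * [y; 1];                    % lift to P^2 and apply the linear map
    yPrime = p(1:2) / p(3);            % back to R^2 (assumes p(3) ~= 0)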
Digression: Mutual Information Registration, Preliminaries - I
An example of an area based method. Intuition: register in order to maximize the statistical knowledge regarding image $I$ given image $I'$.
Definition (Mutual Information). The mutual information $I(x; y)$ for the random variables $x$ and $y$ is
$I(x; y) \doteq H(x) - H(x|y) = H(y) - H(y|x)$

Digression: Mutual Information Registration, Preliminaries - II
Definition. The entropy $H$ of a (discrete) random variable $x$ that takes values over the alphabet $\mathcal{X}$ is
$H(x) \doteq -\sum_{x\in\mathcal{X}} p(x)\log_2 p(x)$
Definition. The conditional entropy $H(x|y)$ is
$H(x|y) \doteq -\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}} p(x, y)\log_2 p(x|y)$

Digression: Mutual Information Registration, Formalization
$T_\theta(\mathbf{x})$ is the transformation that establishes the mapping between the two images.
Goal: determine the parameter $\theta$ such that $I(\mathbf{x}) = I'(T_\theta(\mathbf{x}))$ for every $\mathbf{x}$.
Solution: maximize the mutual information: $\theta^* = \arg\max_{\theta\in\mathbb{R}^p} I(I; I')$
Simpler to say than to realize...
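A minimal Matlab sketch (not from the slides) that estimates the mutual information between two already warped gray level images from their joint histogram, using the equivalent form I(A;B) = H(A) + H(B) - H(A,B); the names A and B, the 0..255 integer range, and the equal image sizes are assumptions.

    % Estimate I(A; B) from a 256x256 joint histogram.
    nBins = 256;
    idxA  = double(A(:)) + 1;                       % map 0..255 to 1..256
    idxB  = double(B(:)) + 1;
    jointCounts = accumarray([idxA idxB], 1, [nBins nBins]);
    pAB = jointCounts / numel(A);                   % joint pmf
    pA  = sum(pAB, 2);  pB = sum(pAB, 1);           % marginal pmfs
    ent = @(p) -sum(p(p > 0) .* log2(p(p > 0)));    % entropy in bits
    MI  = ent(pA) + ent(pB) - ent(pAB);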
Point Feature Detection

The Gradient Normal Matrix: Preliminaries
$I(\mathbf{x})$ is the intensity of a single channel image at the point $\mathbf{x} = [x_1\ x_2]^T$; $\Omega(\mathbf{x})$ is a neighborhood about the point of interest $\mathbf{x}$.
The gradient matrix is defined as:
$A(\Omega(\mathbf{x})) \doteq \begin{bmatrix}\nabla^T I(\mathbf{y}_1)\\ \vdots\\ \nabla^T I(\mathbf{y}_N)\end{bmatrix} = \begin{bmatrix}I_{x_1}(\mathbf{y}_1) & I_{x_2}(\mathbf{y}_1)\\ \vdots & \vdots\\ I_{x_1}(\mathbf{y}_N) & I_{x_2}(\mathbf{y}_N)\end{bmatrix}$
The gradient normal matrix is defined as:
$A^T A \doteq \begin{bmatrix}\sum_{i=1}^N I_{x_1}(\mathbf{y}_i)^2 & \sum_{i=1}^N I_{x_1}(\mathbf{y}_i)I_{x_2}(\mathbf{y}_i)\\ \sum_{i=1}^N I_{x_1}(\mathbf{y}_i)I_{x_2}(\mathbf{y}_i) & \sum_{i=1}^N I_{x_2}(\mathbf{y}_i)^2\end{bmatrix}$

Condition Theory Primer: Shaking Things
Consider $A = \begin{bmatrix}1.0000 & 2.0000\\ 2.0000 & 4.0001\end{bmatrix}$ and $b = \begin{bmatrix}10 + \epsilon\\ 20\end{bmatrix}$. Solve $A\mathbf{x} = b$. Easy? Not really:
$\mathbf{x} = A^{-1}b = 10000\begin{bmatrix}4.0001 & -2.0000\\ -2.0000 & 1.0000\end{bmatrix}\begin{bmatrix}10+\epsilon\\ 20\end{bmatrix} = 10000\begin{bmatrix}0.0010 + 4.0001\,\epsilon\\ -2.0000\,\epsilon\end{bmatrix}$
If $\epsilon = 0$ then $\mathbf{x} = \begin{bmatrix}10\\ 0\end{bmatrix}$; if $\epsilon = 0.01$ then $\mathbf{x} = \begin{bmatrix}410.0100\\ -200.0000\end{bmatrix}$.

Condition Theory Primer: Differential Condition Number
The solution of a system of equations is a mapping from the input data $b \in \mathbb{R}^n$ to the solution or output $\mathbf{x} = \mathbf{x}(b) \in \mathbb{R}^m$. If a small change in $b$ produces a large change in $\mathbf{x}(b)$, then $\mathbf{x}$ is ill-conditioned at $b$.
Definition. The local or differential condition number is
$K = K(\mathbf{x}, b) \doteq \lim_{\delta\to 0}\sup_{\|\delta b\|\leq\delta}\frac{\|\mathbf{x}(b+\delta b) - \mathbf{x}(b)\|}{\|\delta b\|}$
Theorem. For a linear system of equations $A\mathbf{x} = b$ we have $K = K(\mathbf{x}, b) = \|A^{-1}\|$.

Condition Theory Primer: Differential Condition Number, Measuring Shaking
In the previous example $A = \begin{bmatrix}1.0000 & 2.0000\\ 2.0000 & 4.0001\end{bmatrix}$. The Frobenius norm of $A^{-1}$ is
$\|A^{-1}\|_F = \sqrt{\sum_{i,j}(A^{-1})_{i,j}^2} \approx 5\times 10^4$
Big if compared to the entries and to the size of $A$.
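A minimal Matlab sketch (not part of the slides) that reproduces the "shaking things" example numerically: it solves the system for the two perturbations used above and prints the Frobenius norm of the inverse as the sensitivity measure.

    % Sensitivity of an ill-conditioned 2x2 system to a small perturbation of b.
    A = [1.0000 2.0000; 2.0000 4.0001];
    for epsilon = [0 0.01]
        b = [10 + epsilon; 20];
        x = A \ b;                                            % solve A x = b
        fprintf('epsilon = %.2f  ->  x = [%.4f, %.4f]\n', epsilon, x(1), x(2));
    end
    fprintf('||inv(A)||_F = %.3e\n', norm(inv(A), 'fro'));    % roughly 5e4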
Two Ways to Look at the Problem: Short Baseline Correspondences, a.k.a. Optical Flow
$I = I(\cdot, t)$ is a single channel image sequence parameterized in the time variable $t$. A point of interest has time dependent coordinates $\mathbf{x} = \mathbf{x}(t)$. The optical flow problem is to discover the time evolution of $\mathbf{x}$.
Assumption (constant intensity): $I(\mathbf{x}(t), t) = I(\mathbf{x}(t) + d\mathbf{x},\, t + dt) = c$
A Taylor expansion (neglecting higher order terms) yields:
$I_{x_1}(\mathbf{x}, t)\,dx_1 + I_{x_2}(\mathbf{x}, t)\,dx_2 + I_t(\mathbf{x}, t)\,dt = 0$
In matrix form: $\begin{bmatrix}I_{x_1}(\mathbf{x}, t) & I_{x_2}(\mathbf{x}, t)\end{bmatrix} d\mathbf{x} = -I_t(\mathbf{x}, t)\,dt$

Two Ways to Look at the Problem: Optical Flow, Solving for the Displacement
Goal: estimate $d\mathbf{x} = [dx_1\ dx_2]^T$, i.e. the optical flow vector.
Problem: $[I_{x_1}(\mathbf{x}, t)\ \ I_{x_2}(\mathbf{x}, t)]\,d\mathbf{x} = -I_t(\mathbf{x}, t)\,dt$ is one equation in two unknowns.
Solution: assume that $dx_1$ and $dx_2$ are constant in a region about $\mathbf{x}$. Hence (letting $dt = 1$):
$\begin{bmatrix}I_{x_1}(\mathbf{y}_1, t) & I_{x_2}(\mathbf{y}_1, t)\\ \vdots & \vdots\\ I_{x_1}(\mathbf{y}_N, t) & I_{x_2}(\mathbf{y}_N, t)\end{bmatrix} d\mathbf{x} = -\begin{bmatrix}I_t(\mathbf{y}_1, t)\\ \vdots\\ I_t(\mathbf{y}_N, t)\end{bmatrix}$
Guess what? An overdetermined linear system of equations!

Two Ways to Look at the Problem: Optical Flow, Least Squares Solution
More compactly: $A(\Omega(\mathbf{x}))\,d\mathbf{x} = \beta$, where $\beta \doteq -\begin{bmatrix}I_t(\mathbf{y}_1, t) & \cdots & I_t(\mathbf{y}_N, t)\end{bmatrix}^T$.
Least squares solution recipe: multiply both sides by $A^T$ to obtain a square system, then multiply both members by $(A^T A)^{-1}$ to get:
$d\mathbf{x}_{\mathrm{computed}} = (A^T A)^{-1}A^T\beta = A^+\beta$
A major problem: some points give better estimates of the true optical flow than others.
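A minimal Matlab sketch (not from the slides) of the least squares recipe above at a single point; Ix1, Ix2 and It are assumed to be precomputed derivative images (e.g. with the filters sketched earlier), and (r0, c0) and the window half-size w are assumed to be given with the neighborhood inside the image.

    % Lucas-Kanade style least squares optical flow at one point.
    rows = r0-w : r0+w;  cols = c0-w : c0+w;       % neighborhood Omega(x)
    a1 = Ix1(rows, cols);  a2 = Ix2(rows, cols);   % spatial derivatives in Omega
    A    = [a1(:) a2(:)];                          % N x 2 gradient matrix
    beta = -reshape(It(rows, cols), [], 1);        % N x 1 right hand side
    dx   = (A' * A) \ (A' * beta);                 % dx_computed = (A'A)^(-1) A' beta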
Two Ways to Look at the Problem: Optical Flow, a Thought Experiment
Ansatz: the scene is static, therefore the true optical flow is zero: $d\mathbf{x}_{\mathrm{exact}} = 0$. Suppose the images vary only by additive noise; then $\beta$ represents the noise itself.
Error in the optical flow estimate: $e = d\mathbf{x}_{\mathrm{exact}} - d\mathbf{x}_{\mathrm{computed}}$
Then: $e = d\mathbf{x}_{\mathrm{exact}} - d\mathbf{x}_{\mathrm{computed}} = 0 - A^+\beta = -A^+\beta$, so $\|e\| \leq \|A^+\|\,\|\beta\|$: $\|A^+\|$ controls the error multiplication factor.
But we also saw that $K = K(\mathbf{x}, \beta) = \|A^+\|$.
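A minimal Matlab sketch (not from the slides) that illustrates the error multiplication factor on two synthetic 9x9 patches, a step edge and a single corner; the patches, the central-difference derivatives, and the use of the interior pixels only are assumptions. Since $\|A^+\| = 1/\sigma_{\min}$ when $A$ has full rank, a (near) zero smallest singular value means an unbounded error amplification.

    % Compare the singular values of A for an edge patch and a corner patch.
    edgePatch   = [zeros(9,4) ones(9,5)];               % vertical step edge
    cornerPatch = zeros(9); cornerPatch(1:4,1:4) = 1;    % single bright corner
    patches = {edgePatch, cornerPatch};
    for k = 1:2
        I   = patches{k};
        Ix1 = conv2(I, [-1 0 1]'/2, 'same');             % derivative along rows
        Ix2 = conv2(I, [-1 0 1] /2, 'same');             % derivative along columns
        A   = [reshape(Ix1(2:8,2:8), [], 1), reshape(Ix2(2:8,2:8), [], 1)];
        s   = svd(A);                                    % s(2) = 0 for the edge patch
        fprintf('patch %d: sigma = [%.3f, %.3f]\n', k, s(1), s(2));
    end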
Two Ways to Look at the Problem: Wide Baseline Correspondences, Estimating Local Transformations
Consider two corresponding neighborhoods $\Omega(\mathbf{x})$ and $\Omega(\mathbf{x}')$. Define the cost function:
$C_T(\theta) \doteq \frac{1}{2}\sum_{\mathbf{y}\in\Omega(\mathbf{x})} w(\mathbf{y} - \mathbf{x})\left(I(\mathbf{y}) - I'(T_{\theta,\mathbf{x}}(\mathbf{y}))\right)^2$
Goal: estimate the parameter vector that minimizes $C_T(\theta)$, i.e.:
$\hat\theta = \arg\min_{\theta\in\mathbb{R}^p} C_T(\theta)$
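A minimal Matlab sketch (not from the slides) that evaluates the cost $C_T(\theta)$ for a pure translation candidate theta = [t1; t2] with a Gaussian weighting window; I1, I2, the anchor (r0, c0), the half-size w, and the helper bilinear(I2, p, q) (a hypothetical routine like the interpolation sketch earlier) are all assumptions, and the translated neighborhood is assumed to stay inside I2.

    % Weighted SSD cost of a candidate translation over a (2w+1)x(2w+1) neighborhood.
    C = 0;
    for dr = -w:w
        for dc = -w:w
            wgt = exp(-(dr^2 + dc^2) / (2 * (w/2)^2));   % weighting window w(y - x)
            p   = r0 + dr + theta(1);                    % T_theta(y), rows
            q   = c0 + dc + theta(2);                    % T_theta(y), columns
            C   = C + 0.5 * wgt * (I1(r0+dr, c0+dc) - bilinear(I2, p, q))^2;
        end
    end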
Two Ways to Look at the Problem: The Right Question (and Hopefully the Right Answer)
Which points allow $\theta$ to be estimated reliably? Those points such that small amounts of noise will not cause the estimate $\hat\theta$ to be inaccurate.
Modeling the effect of noise: $I'(T_{\theta+\delta\theta,\mathbf{x}}(\mathbf{y})) = I(\mathbf{y}) + \delta I$. Small amounts of $\delta I$ should not produce large perturbations $\delta\theta$.
Definition (Differential Condition Number for Point Neighborhoods). The condition number associated with the point neighborhood $\Omega(\mathbf{x})$ with respect to $T_{\theta,\mathbf{x}}$ is
$K_{T,\mathbf{x}}(\Omega(\mathbf{x})) \doteq \lim_{\delta\to 0}\sup_{\|\delta I\|\leq\delta}\frac{\|\delta\theta\|}{\|\delta I\|}$

Two Ways to Look at the Problem: The Quantitative Answer
Theorem (Estimate of the Differential Condition Number for Point Neighborhoods). The expression for the estimate of the condition number for the point neighborhood $\Omega(\mathbf{x})$ is
$\hat{K}_{T,\mathbf{x}}(\Omega(\mathbf{x})) = \|A^+(\Omega(\mathbf{x}))\|$
where the matrix $A(\Omega(\mathbf{x})) \doteq \begin{bmatrix}A(\mathbf{y}_1)\\ \vdots\\ A(\mathbf{y}_N)\end{bmatrix} \in \mathbb{R}^{mN\times p}$ is formed by the $N$ sub-matrices $A(\mathbf{y}_i) \doteq w(\mathbf{y}_i - \mathbf{x})\,J_I(\mathbf{y}_i)\,J_{T_{\theta,\mathbf{x}}}(\mathbf{y}_i)$ obtained from a set of $N$ points that sample the neighborhood $\Omega(\mathbf{x})$.

Two Ways to Look at the Problem: Standpoint Summary
"Good points", a.k.a. corners, are related to the (spectral) properties of the generalized gradient matrix
$A(\Omega(\mathbf{x})) \doteq \begin{bmatrix}A(\mathbf{y}_1)\\ \vdots\\ A(\mathbf{y}_N)\end{bmatrix} \in \mathbb{R}^{mN\times p}$, where $A(\mathbf{y}_i) = w(\mathbf{y}_i - \mathbf{x})\,J_\theta\!\left[I'(T_{\theta,\mathbf{x}}(\mathbf{y}_i))\right] = w(\mathbf{y}_i - \mathbf{x})\,J_I(\mathbf{y}_i)\,J_{T_{\theta,\mathbf{x}}}(\mathbf{y}_i)$

Corner Detectors: Spectral Corner Detectors
Definition (Spectral Corner Detector). A spectral corner detector is a functional that depends solely on the singular values of the generalized gradient matrix:
$f: \mathcal{I}\times\mathbb{R}^2 \to \mathbb{R}$, $(I, \mathbf{x}) \mapsto f(\sigma(A(\Omega(\mathbf{x}))))$
Common corner detectors (with $\sigma_1 \geq \sigma_2$ the singular values of $A$):
- Harris-Stephens: $f_{HS} = \sigma_1^2\sigma_2^2 - \kappa(\sigma_1^2 + \sigma_2^2)^2 = \det(A^T A) - \kappa\,\mathrm{trace}(A^T A)^2$
- Rohr: $f_R = \sigma_1\sigma_2$
- Noble-Förstner: $f_{NF} = \frac{\sigma_1^2\sigma_2^2}{\sigma_1^2 + \sigma_2^2} = \frac{\det(A^T A)}{\mathrm{trace}(A^T A)}$
- Shi-Tomasi: $f_{ST} = \sigma_{\min}^2$

Corner Detectors: Condition Number Corner Detectors
Definition (Condition Number Corner Detector). A condition number corner detector is a spectral corner detector such that:
$f: \mathcal{I}\times\mathbb{R}^2 \to \mathbb{R}$, $(I, \mathbf{x}) \mapsto \frac{1}{\|A^+(\Omega(\mathbf{x}))\|^2_{S,2q}}$
Definition (Schatten Matrix q-norm). The Schatten matrix q-norm is defined as
$\|A\|_{S,q} \doteq \left(\sum_i \sigma_i(A)^q\right)^{1/q}$
where $\sigma_i(A)$ is the $i$-th singular value of the matrix $A$.
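A minimal Matlab sketch (not from the slides) that evaluates the spectral detectors above on one neighborhood, reusing a gradient matrix A built as in the optical flow example; kappa is the Harris sensitivity constant, set to 0.04 here by assumption.

    % Spectral corner responses from the singular values of the gradient matrix A.
    s     = svd(A);                        % s(1) >= s(2) >= 0
    kappa = 0.04;                          % assumed Harris constant
    M     = A' * A;                        % gradient normal matrix
    fHS = det(M) - kappa * trace(M)^2;     % Harris-Stephens
    fR  = s(1) * s(2);                     % Rohr
    fNF = det(M) / max(trace(M), eps);     % Noble-Foerstner (guarded against flat patches)
    fST = s(2)^2;                          % Shi-Tomasi (smallest eigenvalue of A'A)
    fKq = 1 / norm(pinv(A))^2;             % condition number detector for q -> infinity (equals fST)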
Corner Detectors: Putting Everything Together
Theorem (Corner Detectors Equivalence Relations). The following relations hold among the spectral corner detectors when the transformation $T_{\theta,\mathbf{x}}$ models a simple translation:
- Generalized Rohr equivalence: $\lim_{q\to 0}\sqrt[q]{p}\, f_{K,q} = f_R$
- Generalized Noble-Förstner equivalence: $f_{K,1} = f_{NF}$
- Generalized Shi-Tomasi equivalence: $f_{K,\infty} = f_{ST}$
Theorem (Analytical Bounds). Bounds relating $f_{K,q}$ for the translation, RST, and affine mappings.

Corner Detectors: Noble-Förstner Response for Different $T_{\theta,\mathbf{x}}$
(figure: Noble-Förstner response maps $f_{NF}$ of the same image for the translation, RST, and affine models)
(figure: profile of $\log f_{NF}$ along the line x = 250 for the translation, RST, and affine models)

Corner Detectors: Homework
Write a Matlab function to detect the corners in an arbitrary gray level image using the Noble-Förstner detector. The syntax of the function should be [x y f] = compute_corners(I, sigma, r), where:
- I is the single channel input image,
- sigma is the standard deviation of the Gaussian differentiation filter in pixels,
- r is the radius of the circular neighborhood $\Omega(\mathbf{x})$,
- x, y are the positions of the interest points,
- f is the detector map, i.e. the response of the detector at each location of the image.

Corner Detectors: Protecting Luca's Mental Health
A necessary (but not sufficient) condition to complete the assignment is that your function satisfies the following testing protocol:
- The workspace will contain the image I and the variables sigma and r.
- The command [x y f] = compute_corners(I, sigma, r); will be issued.
- The results will be evaluated by superimposing the detected points on the original image and displaying the detector map:

    figure
    imshow(I); hold on;
    plot(y, x, 'r+');
    figure
    imagesc(f); axis equal tight; colormap gray;