Unpublished Manuscript. Please do not copy, distribute, or disseminate via electronic means.

New Insight into Optical-Flow Computation with Regularization

Tomy Tsai, Dan Koppel, and Yuan-Fang Wang
Department of Computer Science
University of California, Santa Barbara
{tomyt, dkoppel, yfwang}@cs.ucsb.edu

January 10, 2007

Abstract

This paper reports a technique that improves the robustness and accuracy of computing dense optical-flow fields using a global formulation with a regularization term. It is shown that while many regularizers have been proposed (image-driven, flow-driven, homogeneous, inhomogeneous, isotropic, anisotropic), they are all variants of a single base expression, $\nabla u \nabla u^T + \nabla v \nabla v^T$. Strictly speaking, these regularizers are valid only for 2D translational motion, because what they essentially do is penalize changes in the flow field. However, many flow patterns induced by a 3D rigid-body motion, such as rotation, zoom, and their combinations, are not constant. The traditional regularizers then incorrectly penalize these legal flow patterns and produce biased estimates. The purpose of this work is therefore to derive a new suite of regularization expressions that treat all valid flow patterns resulting from a 3D rigid-body motion equally, without unfairly penalizing any of them.
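The abstract's central observation can be checked numerically. The following is a minimal sketch, assuming NumPy; the helper summarizes the base regularizer $\nabla u \nabla u^T + \nabla v \nabla v^T$ by its trace, $|\nabla u|^2 + |\nabla v|^2$, and the grid, flow fields, and function name are illustrative choices, not taken from the manuscript:

```python
import numpy as np

def regularizer_energy(u, v):
    """Sum of |grad u|^2 + |grad v|^2 over the field, i.e. the trace of the
    base regularizer grad(u)grad(u)^T + grad(v)grad(v)^T, integrated."""
    uy, ux = np.gradient(u)   # np.gradient returns derivatives along (axis 0, axis 1)
    vy, vx = np.gradient(v)
    return float(np.sum(ux**2 + uy**2 + vx**2 + vy**2))

y, x = np.mgrid[-10:11, -10:11].astype(float)

# 2D translation: a constant flow field, so the regularizer assigns zero penalty.
u_t, v_t = np.full_like(x, 1.5), np.full_like(x, -0.5)

# Rotation about the optical axis: u = -omega*y, v = omega*x. This is a valid
# flow induced by a rigid-body motion, yet its gradients are nonzero everywhere.
omega = 0.1
u_r, v_r = -omega * y, omega * x

print(regularizer_energy(u_t, v_t))      # 0.0: translation is not penalized
print(regularizer_energy(u_r, v_r) > 0)  # True: rotation is (incorrectly) penalized
```

The zoom case behaves like the rotation case: a radial flow $u = s\,x$, $v = s\,y$ also has constant nonzero gradients, so the base expression penalizes it as well.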
1 Introduction

Our research addresses the problem of computing the optical-flow fields induced by rigid-body motions in space. The proposed framework applies to scenarios in which a static scene is viewed by a moving camera or observer, as in mobile robotic applications. It is equally applicable when both the camera and the objects in the scene move independently, but rigidly.

Computing optical-flow fields from video has been investigated extensively over the past two decades, and voluminous results have been reported on this topic. (The reference section contains a partial list; some slightly dated surveys are found in [2, 4, 43, 60].) Hence, before yet another technique is proposed, it is important to understand the available options and identify areas needing further improvement, if any. Our first task is therefore to provide a taxonomy of the existing methods for computing the optical flow (Fig. 1) and to explain and contrast these methods in some detail. (The discussion is indexed by the box numbers in Fig. 1.)

Figure 1: A taxonomy of a variety of algorithms for computing the optical-flow fields.

Boxes A and B. An optical-flow field is a dense 2D displacement field in images, $\mathbf{u}(x, y) = [u(x, y), v(x, y)]^T$. Assuming that the pixel intensity is preserved over time (the assumption can be relaxed, e.g., in [23, 48]), the basic formulation of computing the optical-flow field is to minimize the discrepancy between the intensities of the matched pixels in adjacent frames,

$$ e = \iint_\Omega \big( I(x, y, t) - I(x + u, y + v, t + 1) \big)^2 \, dx \, dy , \qquad (1) $$

where $\Omega$ refers to the whole image plane. Taking the Taylor-series expansion of the integrand in Eq. (1) and retaining the first-order terms yields the familiar optical-flow constraint $I_x u + I_y v + I_t = 0$.
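The data term in Eq. (1) can be sketched numerically under a first-order Taylor linearization of the brightness-constancy assumption. The sketch below assumes NumPy; the function name, the synthetic image pair, and the finite-difference approximations of $I_x$, $I_y$, $I_t$ are illustrative choices, not taken from the manuscript:

```python
import numpy as np

def linearized_data_residual(I0, I1, u, v):
    """Per-pixel residual of the first-order expansion of the data term in
    Eq. (1): I_x*u + I_y*v + I_t, with I_t approximated by a frame difference."""
    Iy, Ix = np.gradient(I0)   # spatial derivatives of the first frame
    It = I1 - I0               # temporal derivative
    return Ix * u + Iy * v + It

# Synthetic pair: frame I1 is frame I0 shifted right by one pixel, so the
# true flow is u = 1, v = 0, and the residual should be near zero for a
# smooth image (the linearization holds up to second-order terms).
y, x = np.mgrid[0:64, 0:64].astype(float)
I0 = np.sin(0.2 * x) + np.cos(0.15 * y)
I1 = np.sin(0.2 * (x - 1)) + np.cos(0.15 * y)

r = linearized_data_residual(I0, I1, np.ones_like(I0), np.zeros_like(I0))
print(np.mean(np.abs(r)))  # small: the correct flow nearly zeroes the data term
```

Minimizing Eq. (1) alone is underconstrained (the aperture problem: one constraint per pixel, two unknowns), which is precisely why the global formulations discussed here add a regularization term.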
This note was uploaded on 08/06/2008 for the course CS 595I taught by Professor Wang during the Winter '07 term at UCSB.
