Vision-based Mobile Robot Localization And Mapping using Scale-Invariant Features

Stephen Se, David Lowe, Jim Little
Department of Computer Science, University of British Columbia
Vancouver, B.C. V6T 1Z4, Canada
{se,lowe,little}@cs.ubc.ca

Abstract

A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, build a map of the environment. In this paper, a vision-based mobile robot localization and mapping algorithm is described which uses scale-invariant image features as landmarks in unmodified, dynamic environments. These 3D landmarks are localized, and robot ego-motion is estimated by matching them, taking into account the feature viewpoint variation. Experiments with our Triclops stereo vision system show that these features are robustly matched between views, 3D landmarks are tracked, robot pose is estimated and a 3D map is built.

1 Introduction

Mobile robot localization and mapping, the process of simultaneously tracking the position of a mobile robot relative to its environment and building a map of that environment, has been a central research topic for the past few years. Accurate localization is a prerequisite for building a good map, and an accurate map is essential for good localization. Simultaneous Localization And Map Building (SLAMB) is therefore a critical underlying capability for successful mobile robot navigation in a large environment, irrespective of the higher-level goals or applications.

SLAMB can be achieved with different sensor modalities, such as sonar, laser range finders and vision. Many early successful approaches [2] utilize artificial landmarks, such as bar-code reflectors, ultrasonic beacons and visual patterns, and therefore do not function properly in beacon-free environments. Vision-based approaches that use stable natural landmarks in unmodified environments are highly desirable for a wide range of applications.

Harris's 3D vision system DROID [8] uses the visual motion of image corner features for 3D reconstruction. Kalman filters track the features, from which both the camera motion and the 3D positions of the features are determined. It is accurate in the short to medium term, but long-term drift can occur, since the ego-motion and the perceived 3D structure can be self-consistently in error. It is an incremental algorithm that runs at near real-time rates.

A stereo vision algorithm for mobile robot mapping and navigation is proposed in [13], where a 2D occupancy grid map is built from the stereo data. However, since the robot does not localize itself using the map, odometry error is not corrected and the map may drift over time. [10] proposed combining this 2D occupancy map with sparse 3D landmarks for robot localization, using corners on planar objects as stable landmarks. However, landmarks are matched only in the next frame and are not kept for matching in subsequent frames.

Markov localization was employed by various teams
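To make the Kalman-filter tracking idea behind DROID-style systems concrete, the following is a minimal sketch, not DROID's actual formulation: each landmark's 3D position is maintained by its own filter with a static-state model (so the prediction step is the identity), updated whenever the feature is re-observed. The class name and the noise parameters are assumptions chosen for illustration.

```python
import numpy as np

class PointKalmanFilter:
    """Minimal per-landmark Kalman filter: a static 3D point observed
    with additive Gaussian noise (illustrative, not DROID's model)."""

    def __init__(self, initial_xyz, initial_var=1.0, meas_var=0.05):
        self.x = np.asarray(initial_xyz, dtype=float)  # state: 3D position
        self.P = np.eye(3) * initial_var               # state covariance
        self.R = np.eye(3) * meas_var                  # measurement noise

    def update(self, measured_xyz):
        # Static landmark => prediction step leaves x and P unchanged.
        z = np.asarray(measured_xyz, dtype=float)
        S = self.P + self.R                 # innovation covariance
        K = self.P @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ (z - self.x)  # corrected estimate
        self.P = (np.eye(3) - K) @ self.P   # reduced uncertainty

# Example: repeated noisy observations of a landmark near (1, 2, 5).
kf = PointKalmanFilter([1.1, 1.9, 5.2])
for z in ([0.95, 2.05, 4.9], [1.02, 1.98, 5.1], [1.0, 2.0, 5.0]):
    kf.update(z)
print(kf.x, np.diag(kf.P))
```

With each update the covariance shrinks, which is what makes the short-to-medium-term estimates accurate; the long-term drift noted above arises because camera motion and structure are estimated jointly and can be consistently wrong together.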
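The occupancy-grid mapping of [13] can likewise be pictured with a generic log-odds grid update: each stereo range reading marks the hit cell as more likely occupied and the cells along the ray as more likely free. The sketch below illustrates that general technique only, not the paper's implementation; the grid size, resolution and log-odds increments are invented values for the example.

```python
import numpy as np

GRID = np.zeros((200, 200))   # log-odds map, 0 = unknown (assumed size)
RES = 0.05                    # metres per cell (assumed resolution)
L_OCC, L_FREE = 0.9, -0.4     # log-odds increments (assumed values)

def to_cell(x, y):
    """Map metric coordinates to grid indices, robot at the grid centre."""
    return int(round(x / RES)) + 100, int(round(y / RES)) + 100

def integrate_ray(robot_xy, hit_xy):
    """Update cells along one stereo ray from the robot to the hit point."""
    steps = int(np.hypot(*(np.subtract(hit_xy, robot_xy))) / RES)
    for s in range(steps):
        t = s / max(steps, 1)
        cx, cy = to_cell(robot_xy[0] + t * (hit_xy[0] - robot_xy[0]),
                         robot_xy[1] + t * (hit_xy[1] - robot_xy[1]))
        GRID[cx, cy] += L_FREE       # space in front of the hit is free
    GRID[to_cell(*hit_xy)] += L_OCC  # the hit cell itself is occupied

integrate_ray((0.0, 0.0), (1.0, 0.5))
occupied = GRID > 0                  # threshold log-odds for a binary map
```

Because updates like this are driven purely by odometry-referenced poses, uncorrected odometry error accumulates in the grid, which is exactly the drift problem described above.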