IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 55, NO. 3, MARCH 2008, p. 1385

Multimodal Approach to Human-Face Detection and Tracking

Prahlad Vadakkepat, Senior Member, IEEE, Peter Lim, Liyanage C. De Silva, Liu Jing, and Li Li Ling

Abstract—The growing need for robots to coexist with humans requires human–machine interaction. Operating such robots in dynamic environments is challenging, since it requires continuous decision-making and real-time updating of environment attributes. An autonomous robot guide is well suited to places such as museums, libraries, schools, and hospitals. This paper addresses a scenario in which a robot tracks and follows a human. A neural network is used to learn skin and nonskin colors, and the resulting skin-color probability map is used for skin classification together with morphology-based preprocessing. A heuristic rule performs face-ratio analysis, and Bayesian cost analysis is applied for label classification. A face-detection module based on a 2-D color model in the YCrCb and YUV color spaces is selected over the traditional skin-color model in a 3-D color space. A modified Continuously Adaptive Mean Shift tracking mechanism operating on the 1-D Hue channel of the Hue, Saturation, and Value color space is developed and implemented on the mobile robot. In addition to the visual cues, the tracking process fuses 16 sonar-scan readings and tactile-sensor readings from the robot to obtain a robust estimate of the person's distance from the robot. The robot then decides on an appropriate action, namely, following the human subject while avoiding obstacles. The proposed approach is orientation invariant under varying lighting conditions and invariant to natural transformations such as translation, rotation, and scaling. Such a multimodal solution is effective for face detection and tracking.

Index Terms—Continuously adaptive mean shift (CAMSHIFT) tracking mechanism, face tracking, facial skin-color model, multimodal approach.

I.
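The 2-D chrominance (CrCb) skin model mentioned in the abstract can be illustrated with a minimal sketch. The RGB-to-YCrCb conversion below uses the standard ITU-R BT.601 coefficients; the rectangular Cr/Cb skin ranges are commonly cited illustrative values, not the boundaries learned by the paper's neural network, which produces a full probability map rather than a hard threshold.

```python
def rgb_to_crcb(r, g, b):
    """Convert an RGB pixel (0-255 per channel) to its Cr, Cb chrominance
    components using the ITU-R BT.601 full-range approximation."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    cr = 0.713 * (r - y) + 128              # red-difference chroma
    cb = 0.564 * (b - y) + 128              # blue-difference chroma
    return cr, cb

def is_skin(r, g, b, cr_range=(133, 173), cb_range=(77, 127)):
    """Classify a pixel as skin if its chrominance falls inside a
    rectangular CrCb region. The default ranges are illustrative
    literature values, not the paper's learned model."""
    cr, cb = rgb_to_crcb(r, g, b)
    return cr_range[0] <= cr <= cr_range[1] and cb_range[0] <= cb <= cb_range[1]

# Example: a typical skin tone falls inside the region; pure green does not.
print(is_skin(224, 180, 150))  # True
print(is_skin(0, 255, 0))      # False
```

Working in the CrCb plane alone discards the luminance component, which is what gives a 2-D chrominance model its relative robustness to lighting variation compared with a model defined over a full 3-D color space.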
INTRODUCTION

MOBILE ROBOTS are increasingly being integrated into our daily lives. To provide a robot with basic navigational ability and to study the coarse structure of the environment, visual, sonar, ultrasonic, infrared, and other range sensors are required. The robot has to acquire information about the environment through these various sensors. Nevertheless, the dynamic nature of the environment and the need to interact with users place more challenging requirements on robot perception. In this paper, the research focus is placed on

Manuscript received September 19, 2006; revised June 7, 2007. P. Vadakkepat, P. Lim, and L. Jing are with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117576 (e-mail: [email protected]; [email protected])....