Lu-SoundSense — SoundSense: Scalable Sound Sensing for People-Centric Applications on Mobile Phones


SoundSense: Scalable Sound Sensing for People-Centric Applications on Mobile Phones
Hong Lu, Wei Pan, Nicholas D. Lane, Tanzeem Choudhury and Andrew T. Campbell
Department of Computer Science, Dartmouth College
{hong, pway, niclane, campbell}@cs.dartmouth.edu, tanzeem.choudhury@dartmouth.edu

ABSTRACT
Top end mobile phones include a number of specialized (e.g., accelerometer, compass, GPS) and general purpose sensors (e.g., microphone, camera) that enable new people-centric sensing applications. Perhaps the most ubiquitous and unexploited sensor on mobile phones is the microphone: a powerful sensor that is capable of making sophisticated inferences about human activity, location, and social events from sound. In this paper, we exploit this untapped sensor not in the context of human communications but as an enabler of new sensing applications. We propose SoundSense, a scalable framework for modeling sound events on mobile phones. SoundSense is implemented on the Apple iPhone and represents the first general purpose sound sensing system specifically designed to work on resource limited phones. The architecture and algorithms are designed for scalability, and SoundSense uses a combination of supervised and unsupervised learning techniques to classify both general sound types (e.g., music, voice) and discover novel sound events specific to individual users. The system runs solely on the mobile phone with no back-end interactions. Through implementation and evaluation of two proof-of-concept people-centric sensing applications, we demonstrate that SoundSense is capable of recognizing meaningful sound events that occur in users' everyday lives.
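The abstract describes classifying frames of microphone audio into coarse sound types directly on the phone. As a loose illustration only (not the paper's actual pipeline), a minimal frame-level classifier might compute two cheap per-frame features, RMS energy and zero-crossing rate, and apply a simple decision rule; all function names and thresholds below are hypothetical:

```python
import math

def frame_features(samples, frame_size=512):
    """Split a mono PCM signal (floats in [-1, 1]) into fixed-size frames
    and compute two lightweight features per frame:
    RMS energy and zero-crossing rate (ZCR)."""
    feats = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        rms = math.sqrt(sum(s * s for s in frame) / frame_size)
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        ) / (frame_size - 1)
        feats.append((rms, zcr))
    return feats

def coarse_label(rms, zcr, rms_floor=0.01, zcr_voice=0.1):
    """Toy decision rule with placeholder thresholds: 'silence' if the
    frame is too quiet, 'voiced' if sign changes are rare (periodic
    signal), otherwise 'noisy/other'."""
    if rms < rms_floor:
        return "silence"
    return "voiced" if zcr < zcr_voice else "noisy/other"

# Synthetic demo: one second of a 200 Hz sine (voiced-like),
# followed by one second of silence, at an 8 kHz sample rate.
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(sr)]
quiet = [0.0] * sr
labels = [coarse_label(r, z) for r, z in frame_features(tone + quiet)]
```

In the demo, frames from the sine segment are labeled "voiced" and frames from the silent tail "silence". A real system would of course use richer spectral features and learned (rather than hand-set) decision boundaries, as the paper's supervised/unsupervised combination suggests.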
Categories and Subject Descriptors
C.3 [Special-Purpose and Application-Based Systems]: Real-time and embedded systems

General Terms
Algorithms, Design, Experimentation, Human Factors, Measurement, Performance

Keywords
Audio Processing, Mobile Phones, Sound Classification, People-centric Sensing, Urban Sensing

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
MobiSys'09, June 22–25, 2009, Kraków, Poland.
Copyright 2009 ACM 978-1-60558-566-6/09/06 ...$5.00.

1. INTRODUCTION
We are on the brink of a new era in the development of the ubiquitous mobile phone. Many top end mobile phones [28] [4] [17] now come with GPS, WiFi and cellular localization and embedded sensors (e.g., digital compass, proximity sensors, and accelerometers). These mobile devices are leading to the emergence of new applications in health care, gaming, social networks, and recreational sports, enabling people-centric sensing applications [11] [1] [10]. Perhaps one...

This note was uploaded on 08/25/2011 for the course EEL 6788, taught by Professor Boloni, L., during the Spring '08 term at the University of Central Florida.

