10.1.1.67.7230 - Model-Based Face De-Identification Ralph...

Model-Based Face De-Identification

Ralph Gross, Latanya Sweeney
Data Privacy Lab, School of Computer Science, Carnegie Mellon University, USA
rgross@cs.cmu.edu, latanya@privacy.cs.cmu.edu

Fernando de la Torre, Simon Baker
Robotics Institute, Carnegie Mellon University, USA
{ftorre, simonb}@cs.cmu.edu

Abstract

Advances in camera and computing hardware in recent years have made it increasingly simple to capture and store extensive amounts of video data. This, among other things, creates ample opportunities for the sharing of video sequences. In order to protect the privacy of subjects visible in the scene, automated methods to de-identify the images, particularly the face region, are necessary. The majority of privacy protection schemes currently used in practice rely on ad-hoc methods such as pixelation or blurring of the face. In this paper we show in extensive experiments that pixelation and blurring offer very poor privacy protection while significantly distorting the data. We then introduce a novel framework for de-identifying facial images. Our algorithm combines a model-based face image parameterization with a formal privacy protection model. In experiments on two large-scale data sets we demonstrate both privacy protection and preservation of data utility.

1. Introduction

Due to the continuously falling cost of video capture equipment, it is becoming possible to record, store, and process large quantities of video data. As a consequence, an increasing number of research projects aim at continuously observing and monitoring people in private spaces. The Caremedia project at CMU, for example, captures and analyzes video data recorded in a nursing home facility to support medical personnel in diagnosing and treating behavioral problems of the elderly [5]. The Aware Home project at Georgia Tech equipped a house with an extensive sensor network (including video cameras) with the similar goal of monitoring elderly people [1].
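The ad-hoc de-identification methods the abstract refers to are simple image filters. As a minimal sketch of what "pixelation" and "blurring" mean in this context (function names, block size, and kernel size are illustrative choices, not taken from the paper):

```python
import numpy as np

def pixelate(face, block=8):
    """Pixelation: replace each non-overlapping block x block region
    of the face image by its mean value (works for 2-D grayscale or
    3-D color arrays)."""
    h, w = face.shape[:2]
    out = face.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = face[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = region.mean(axis=(0, 1))
    return out

def box_blur(face, k=9):
    """Blurring: a naive k x k mean (box) filter; windows are clipped
    at the image border."""
    h, w = face.shape[:2]
    out = np.empty(face.shape, dtype=float)
    r = k // 2
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = face[y0:y1, x0:x1].mean(axis=(0, 1))
    return out
```

Both operations destroy high-frequency detail uniformly, which is why, as the paper argues, they degrade data utility without giving any formal guarantee against re-identification.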
Privacy concerns of non-consenting subjects, however, limit the ability of researchers to exchange raw data and often require labor-intensive manual post-processing to remove portions of the data. These are examples of a growing number of applications in which valuable video data cannot be shared due to fear of re-identification. Out of this situation arises the need for automatic methods to remove identifying information from images, particularly the face region. The goal is to remove as much identifying information as necessary while preserving as much of the data utility as possible.

Previous work on de-identifying facial images falls into one of two categories: ad-hoc methods such as "blurring" or "pixelation" [20], and formal methods such as k-Same [21] or k-Same-Select [12]. Both types of approaches have shortcomings, which we address in this paper. We first propose a new algorithm, k-Same-M, which combines a model-based face parameterization with a formal privacy protection model. We demonstrate that the algorithm achieves
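The basic pixel-level k-Same scheme cited above ([21]) can be sketched as follows. This is our own illustrative implementation of the general idea, not the paper's k-Same-M algorithm (which operates on model parameters rather than raw pixels); the greedy clustering and Euclidean distance are assumptions for the sketch:

```python
import numpy as np

def k_same(faces, k):
    """Greedy pixel-level k-Same sketch: repeatedly take an unprocessed
    face, find its k-1 nearest unprocessed neighbors (Euclidean distance
    on flattened pixels), and replace all k with their average. Every
    de-identified face is then shared by at least k subjects, bounding
    the probability of correct re-identification by 1/k."""
    faces = np.asarray(faces, dtype=float)
    n = len(faces)
    assert n >= k, "need at least k faces"
    flat = faces.reshape(n, -1)
    out = np.empty_like(faces)
    remaining = list(range(n))
    while remaining:
        if len(remaining) < 2 * k:
            # fewer than 2k left: fold them all into one cluster
            # so no cluster ends up smaller than k
            cluster = remaining
        else:
            seed = remaining[0]
            d = np.linalg.norm(flat[remaining] - flat[seed], axis=1)
            cluster = [remaining[i] for i in np.argsort(d)[:k]]
        avg = faces[cluster].mean(axis=0)
        for i in cluster:
            out[i] = avg
        remaining = [i for i in remaining if i not in cluster]
    return out
```

Averaging raw pixels in this way tends to produce ghosting artifacts when faces are not perfectly aligned, which motivates performing the averaging in a model-based parameter space instead, as the proposed k-Same-M algorithm does.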
