chi-09.2 - User-Defined Gestures for Surface Computing (CHI 2009)

User-Defined Gestures for Surface Computing

Jacob O. Wobbrock
The Information School, DUB Group, University of Washington, Seattle, WA 98195 USA
[email protected]

Meredith Ringel Morris, Andrew D. Wilson
Microsoft Research, One Microsoft Way, Redmond, WA 98052 USA
{merrie, awilson}

ABSTRACT
Many surface computing prototypes have employed gestures created by system designers. Although such gestures are appropriate for early investigations, they are not necessarily reflective of user behavior. We present an approach to designing tabletop gestures that relies on eliciting gestures from non-technical users by first portraying the effect of a gesture, and then asking users to perform its cause. In all, 1080 gestures from 20 participants were logged, analyzed, and paired with think-aloud data for 27 commands performed with 1 and 2 hands. Our findings indicate that users rarely care about the number of fingers they employ, that one hand is preferred to two, that desktop idioms strongly influence users' mental models, and that some commands elicit little gestural agreement, suggesting the need for on-screen widgets. We also present a complete user-defined gesture set, quantitative agreement scores, implications for surface technology, and a taxonomy of surface gestures. Our results will help designers create better gesture sets informed by user behavior.

Author Keywords: Surface, tabletop, gestures, gesture recognition, guessability, signs, referents, think-aloud.

ACM Classification Keywords: H.5.2. Information interfaces and presentation: User Interfaces – Interaction styles, evaluation/methodology, user-centered design.

INTRODUCTION
Recently, researchers in human-computer interaction have been exploring interactive tabletops for use by individuals [29] and groups [17], as part of multi-display environments [7], and for fun and entertainment [31].
A key challenge of surface computing is that traditional input using the keyboard, mouse, and mouse-based widgets is no longer preferable; instead, interactive surfaces are typically controlled via multi-touch freehand gestures. Whereas input devices inherently constrain human motion for meaningful human-computer dialogue [6], surface gestures are versatile and highly varied—almost anything one can do with one's hands could be a potential gesture. To date, most surface gestures have been defined by system designers, who personally employ them or teach them to user-testers [14,17,21,27,34,35]. Despite skillful design, this results in somewhat arbitrary gesture sets whose members may be chosen out of concern for reliable recognition [19]. Although this criterion is important for early prototypes, it is not useful for determining which gestures match those that would be chosen by users. It is therefore timely to consider the types of surface gestures people make without regard for recognition or technical concerns.

Figure 1. A user performing a gesture to pan a field of objects after being prompted by an animation demonstrating the panning effect.
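The abstract mentions quantitative agreement scores for the elicited gestures. The full formula is not shown in this preview; as a sketch only, the snippet below assumes the agreement measure from Wobbrock et al.'s earlier guessability work, in which the score for a referent is the sum, over groups of identical gesture proposals, of the squared fraction of participants in each group. The gesture names are hypothetical examples, not data from the study.

```python
from collections import Counter

def agreement(proposals):
    """Agreement score for one referent (assumed formulation):
    sum over groups of identical proposals of (group size / total)^2.
    Ranges from 1/len(proposals) (all distinct) to 1.0 (unanimous)."""
    total = len(proposals)
    counts = Counter(proposals)  # group identical gesture proposals
    return sum((c / total) ** 2 for c in counts.values())

# Hypothetical proposals from 20 participants for a "pan" referent:
pan = ["drag-one-finger"] * 12 + ["drag-flat-hand"] * 6 + ["two-finger-drag"] * 2
print(round(agreement(pan), 3))  # (12/20)^2 + (6/20)^2 + (2/20)^2 = 0.46
```

A score near 1.0 would suggest a single obvious gesture for a command, while a low score (many distinct proposals) would support the paper's point that some commands elicit little gestural agreement and may need on-screen widgets instead.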

