Style Translation for Human Motion

Eugene Hsu (1), Kari Pulli (1,2), Jovan Popović (1)
(1) Massachusetts Institute of Technology  (2) Nokia Research Center

Figure 1: Our style translation system transforms a normal walk (top) into a sneaky crouch (middle) and a sideways shuffle (bottom).

Abstract

Style translation is the process of transforming an input motion into a new style while preserving its original content. This problem is motivated by the needs of interactive applications, which require rapid processing of captured performances. Our solution learns to translate by analyzing differences between performances of the same content in input and output styles. It relies on a novel correspondence algorithm to align motions, and a linear time-invariant model to represent stylistic differences. Once the model is estimated with system identification, our system is capable of translating streaming input with simple linear operations at each frame.

CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism–Animation

Keywords: Human Simulation, Data Mining, Machine Learning

1 Introduction

Style is a vital component of character animation. In the context of human speech, the delivery of a phrase greatly affects its meaning. The same can be said for human motion. Even for basic actions such as locomotion, the difference between a graceful strut and a defeated limp has a large impact on the tone of the final animation. Consequently, many applications of human animation, such as feature films and video games, often require large data sets that contain many possible combinations of actions and styles. Building such data sets places a great burden on the actors that perform the motions and the technicians that process them.

Style translation alleviates these issues by enabling rapid transformation of human motion into different styles, while preserving content. This allows numerous applications.
A database of normal locomotions, for instance, could be translated into crouching and limping styles while retaining subtle content variations such as turns and pauses. Our system can also be used to extend the capabilities of techniques that rearrange motion capture clips to generate novel content. Additionally, our style translation method can quickly process streaming motion data, allowing its use in interactive applications that demand time and space efficiency.

It is often difficult to describe the desired translation procedurally; thus, an important objective of this work is to infer the appropriate translations from examples. A style translation model is trained by providing matching motions in an input and an output style. To learn the translation from a normal to a limping gait, for instance, one might pair three steps of a normal gait with three steps of a limping one. Their content is identical, but their style comprises the spatial and temporal variations. In practice, such examples can be easily selected from longer performances.
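Pairing example motions of the same content first requires putting their frames in correspondence, since the two performances differ in timing as well as pose. The paper introduces its own correspondence algorithm for this step; purely as a point of reference, the standard baseline for aligning two time series is dynamic time warping. The sketch below is that generic baseline, not the paper's method, and the function name and the Euclidean pose distance are illustrative assumptions:

```python
import numpy as np

def dtw_align(X, Y):
    """Align two pose sequences with classic dynamic time warping.
    X: (n, d) array of pose feature vectors; Y: (m, d) array.
    Returns a list of (i, j) frame-index pairs on the optimal path."""
    n, m = len(X), len(Y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(X[i - 1] - Y[j - 1])  # pose distance
            cost[i, j] = d + min(cost[i - 1, j - 1],  # both advance
                                 cost[i - 1, j],      # Y pauses
                                 cost[i, j - 1])      # X pauses
    # Backtrack from the end to recover the frame correspondence.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

With identical inputs the recovered path is simply the diagonal; the interesting cases are pauses and tempo changes, where the path advances one sequence while holding the other.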
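The abstract's claim of "simple linear operations at each frame" can be pictured as a discrete linear time-invariant filter whose coefficient matrices were fit offline from the aligned examples. The following is only an illustrative sketch of such a streaming update; the class name, the coefficient lists A and B, and the model orders are assumptions, not the paper's actual formulation:

```python
import numpy as np

class LTITranslator:
    """Sketch of per-frame style translation with a linear
    time-invariant model: each output frame is a fixed linear
    combination of recent input frames and recent output frames.
    In the paper, the coefficients would come from system
    identification on aligned example motions; here they are
    supplied directly."""

    def __init__(self, A, B):
        self.A = A  # matrices weighting past output frames
        self.B = B  # matrices weighting current and past input frames
        self.u_hist = []  # recent input frames, newest first
        self.y_hist = []  # recent output frames, newest first

    def step(self, u):
        """Translate one streaming input frame with linear operations."""
        self.u_hist.insert(0, u)
        y = sum(Bj @ uj for Bj, uj in zip(self.B, self.u_hist))
        y += sum(Ai @ yi for Ai, yi in zip(self.A, self.y_hist))
        self.y_hist.insert(0, y)
        # Keep only as much history as the model orders require.
        self.u_hist = self.u_hist[:len(self.B)]
        self.y_hist = self.y_hist[:len(self.A)]
        return y
```

Because each frame needs only a few matrix-vector products and a short history buffer, such a model meets the time and space constraints of interactive, streaming use.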