2. A trajectory synthesis technique based on regularization, which is trained automatically from the recorded video corpus and which can synthesize trajectories in MMM space corresponding to any desired utterance. Results are presented on re-animating human subjects and celebrities, as well as on a series of numerical and psychophysical experiments designed to evaluate the synthetic animations.

Suggested Reading
• Ezzat, Geiger, and Poggio. Trainable Videorealistic Speech Animation. Proceedings of ACM SIGGRAPH 2002, San Antonio, Texas, July 2002.
• Bregler, Covell, and Slaney. Video Rewrite: Driving Visual Speech with Audio. Proceedings of ACM SIGGRAPH, 1997.
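To make the idea of regularization-based trajectory synthesis concrete, here is a minimal sketch (not the paper's actual formulation) of a common variant: each target value y_t with weight w_t pulls the trajectory toward it, while a smoothness penalty on the second differences keeps the motion natural. The function name, the 1-D setting, and the quadratic objective are illustrative assumptions; the actual method operates on multidimensional MMM parameters learned from the corpus.

```python
import numpy as np

def synthesize_trajectory(targets, weights, lam=10.0):
    """Sketch of regularized 1-D trajectory synthesis (illustrative, not the paper's method).

    Minimizes  sum_t w_t (x_t - y_t)^2  +  lam * sum_t (x_{t+1} - 2 x_t + x_{t-1})^2,
    i.e. a weighted data-fit term plus a second-difference smoothness penalty.
    The minimizer solves the linear system (W + lam * D^T D) x = W y.
    """
    y = np.asarray(targets, dtype=float)
    T = y.size
    # Second-difference operator D of shape (T-2, T): (D x)_t = x_t - 2 x_{t+1} + x_{t+2}
    D = np.zeros((T - 2, T))
    for t in range(T - 2):
        D[t, t], D[t, t + 1], D[t, t + 2] = 1.0, -2.0, 1.0
    W = np.diag(np.asarray(weights, dtype=float))
    A = W + lam * D.T @ D
    b = W @ y
    return np.linalg.solve(A, b)

# Example: noisy step-like targets are smoothed into a gradual transition.
targets = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
trajectory = synthesize_trajectory(targets, weights=[1.0] * 6, lam=5.0)
```

Increasing `lam` trades fidelity to the per-frame targets for smoother motion; in the multidimensional case the same linear system is solved per parameter dimension.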