8 - Face animation
from PART III - APPLICATIONS
Published online by Cambridge University Press: 01 June 2011
Summary
Avatars are commonly used in virtual-world and game applications, including online games, Internet chat rooms, virtual communities, and simulation training systems. Typically, these systems allow users to choose an avatar as their identity in the virtual world. One interesting application of face-modeling technology is to let users build personalized avatars and import them into the virtual world. Face-relighting techniques can then be used to make the face appear consistent with the lighting conditions of the virtual world.
After we obtain a face model, we need to animate it to make it look alive. In this chapter, we describe techniques for animating a face model, including speech-synchronized animation and facial expression synthesis.
Talking head
The ability to talk is a basic requirement for a face animation system. The audio track can be either synthesized signals or recorded real speech. There are two different ways to drive the talking head: text or audio. If we use text, we need to synthesize the audio from the text. This is done by a text-to-speech (TTS) system [97]. In addition to generating the audio signals, a TTS system also outputs the phonemes in real time as the audio is played. The face animation system then needs to generate mouth shapes that are consistent with those phonemes. This is called lip synchronization, or lip sync for short.
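The pipeline above can be sketched in a few lines: the TTS engine emits timestamped phonemes, and the animation system maps each phoneme to a visible mouth shape (a viseme). The phoneme labels and the viseme grouping below are illustrative assumptions, not a mapping prescribed by this chapter; real systems use a richer inventory.

```python
# Sketch of phoneme-driven lip sync. Assumes a TTS engine that emits
# (phoneme, start_sec, end_sec) tuples. Phoneme names and viseme groups
# here are hypothetical examples; many phonemes share one mouth shape,
# so the mapping is many-to-one.
PHONEME_TO_VISEME = {
    "p": "closed", "b": "closed", "m": "closed",   # lips pressed together
    "f": "lip_teeth", "v": "lip_teeth",            # lower lip on upper teeth
    "aa": "open_wide", "ae": "open_wide",          # jaw open
    "uw": "rounded", "ow": "rounded",              # lips rounded
    "sil": "neutral",                              # silence -> rest pose
}

def visemes_for(phoneme_track):
    """Convert a timestamped phoneme track into a viseme track that a
    renderer can use to key mouth shapes frame by frame."""
    track = []
    for phoneme, start, end in phoneme_track:
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        # Merge consecutive identical visemes to avoid redundant keys.
        if track and track[-1][0] == viseme:
            track[-1] = (viseme, track[-1][1], end)
        else:
            track.append((viseme, start, end))
    return track

# Example: the word "map" as the phonemes m, ae, p with rough timings.
example = [("m", 0.00, 0.08), ("ae", 0.08, 0.25), ("p", 0.25, 0.33)]
print(visemes_for(example))
```

At playback time, the renderer looks up the active viseme for the current audio timestamp and blends toward that mouth shape, which keeps the face synchronized with either synthesized or recorded speech.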
- *Face Geometry and Appearance Modeling: Concepts and Applications*, pp. 149–180. Publisher: Cambridge University Press. Print publication year: 2011.