Figure 3

From: Lip-Synching Using Speaker-Specific Articulation, Shape and Appearance Models

Some elementary articulations of the face and head that emerge statistically from the motion-capture data of speaker CD using guided PCA. Note that a nonlinear model of the head/neck joint is also parameterized. The zoom on the right-hand side shows that the shape model includes a detailed geometry of the lip region: a lip mesh positioned semiautomatically using a generic lip model [12], as well as a mesh that fills the inner space. This latter mesh attaches the inner lip contour to the ridge of the upper teeth; there is no further attachment to other internal organs (lower teeth, tongue, etc.).
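The guided-PCA decomposition mentioned in the caption can be illustrated with a short sketch. The idea, in the spirit of the articulatory-modeling literature, is to regress the motion-capture frames on a few measured control parameters (e.g. jaw opening), subtract each linear contribution in turn, and then run ordinary PCA on the residual. The function name, array shapes, and choice of guide parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def guided_pca(frames, guides):
    """Sketch of a guided-PCA decomposition (hypothetical helper).

    frames : (T, D) array, one flattened vertex configuration per
             motion-capture frame.
    guides : list of (T,) arrays of measured articulatory parameters
             (e.g. jaw opening), removed one at a time.
    """
    residual = frames - frames.mean(axis=0)   # center on the mean shape
    loadings = []
    for g in guides:
        g = (g - g.mean()) / g.std()          # standardize the guide
        load = residual.T @ g / len(g)        # least-squares loading (D,)
        loadings.append(load)
        residual -= np.outer(g, load)         # remove the explained motion
    # ordinary PCA on whatever the guided parameters left unexplained
    _, _, vt = np.linalg.svd(residual, full_matrices=False)
    return np.asarray(loadings), vt
```

Under these assumptions, each row of `loadings` plays the role of one elementary articulation like those shown in the figure, while the rows of `vt` hold the remaining purely data-driven modes.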