- Research Article
- Open Access
SynFace—Speech-Driven Facial Animation for Virtual Speech-Reading Support
© Giampiero Salvi et al. 2009
Received: 13 March 2009
Accepted: 23 September 2009
Published: 16 November 2009
This paper describes SynFace, a supportive technology that aims at enhancing audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animated talking head. Firstly, we describe the system architecture, consisting of a 3D animated face model controlled from the speech input by a specifically optimised phonetic recogniser. Secondly, we report on speech intelligibility experiments with focus on multilinguality and robustness to audio quality. The system, already available for Swedish, English, and Flemish, was optimised for German and for Swedish wide-band speech quality available in TV, radio, and Internet communication. Lastly, the paper covers experiments with nonverbal motions driven from the speech signal. It is shown that turn-taking gestures can be used to affect the flow of human-human dialogues. We have focused specifically on two categories of cues that may be extracted from the acoustic signal: prominence/emphasis and interactional cues (turn-taking/back-channelling).
For a hearing impaired person, and for a normal hearing person in adverse acoustic conditions, it is often necessary to be able to lip-read as well as hear the person they are talking with in order to communicate successfully. Apart from the lip movements, nonverbal visual information is also essential to keep a normal flow of conversation. Often, only the audio signal is available, for example, during telephone conversations or certain TV broadcasts. The idea behind SynFace is to try to recreate the visible articulation of the speaker, in the form of an animated talking head. The visual signal is presented in synchrony with the acoustic speech signal, which means that the user can benefit from the combined synchronised audiovisual perception of the original speech acoustics and the resynthesised visible articulation. When compared to video telephony solutions, SynFace has the distinct advantage that only the user on the receiving end needs special equipment; the speaker at the other end can use any telephone terminal and technology: fixed, mobile, or IP-telephony.
Several methods have been proposed to drive the lip movements of an avatar from the acoustic speech signal with varying synthesis models and acoustic-to-visual maps. Tamura et al.  used hidden Markov models (HMMs) that are trained on parameters that represent both auditory and visual speech features. Similarly, Nakamura and Yamamoto  propose to estimate the audio-visual joint probability using HMMs. Wen et al.  extract the visual information from the output of a formant analyser. Al Moubayed et al.  map from the lattice output of a phonetic recogniser to texture parameters using neural networks. Hofer et al.  used trajectory hidden Markov models to predict visual speech parameters from an observed sequence.
Most existing approaches to acoustic-to-visual speech mapping can be categorised as either regression based or classification based. Regression-based systems try to map features of the incoming sounds into continuously varying articulatory (or visual) parameters. Classification-based systems, such as SynFace, consider an intermediate phonetic level, thus solving a classification problem, and generating the final face parameters with a rule-based system. This approach has proved to be more appropriate when the focus is on a real-life application, where additional requirements are to be met, for example speaker independence and low latency. Ohman and Salvi  compared two examples of the two paradigms. A time-delayed neural network was used to estimate the face parameter trajectories from spectral features of speech, whereas an HMM phoneme recogniser was used to extract the phonetic information needed to drive the rule-based visual synthesis system. Although the results are dependent on our implementation, we observed that the first method could learn the general trend of the parameter trajectories, but was not accurate enough to provide useful visual information. The same is also observed in Hofer et al.  and Massaro et al. . (Although some speech-reading support was obtained for isolated words from a single speaker in Massaro's paper, this result did not generalise well to extemporaneous speech from different speakers, which is indeed one of the goals of SynFace.) The second method resulted in large errors in the trajectories in the case of misrecognition, but provided, in general, more reliable results.
As for the actual talking head image synthesis, this can be produced using a variety of techniques, typically based on manipulation of video images [8, 9], parametrically deformable models of the human face and/or speech organs [10, 11], or a combination thereof . In our system we employ a deformable 3D model (see Section 2) for reasons of speed and simplicity.
This paper summarises the research that led to the development of the SynFace system and discusses a number of aspects involved in its development, along with novel experiments in multilinguality, dependency on the quality of the speech input, and extraction of nonverbal gestures from the acoustic signal.
The SynFace architecture is described for the first time as a whole in Section 2; Section 3 describes the additional nonverbal gestures. Experiments in German and with wide-band speech quality are described in Section 4. Finally, Section 5 discusses and concludes the paper.
2. SynFace Architecture
The processing chain in SynFace is illustrated in Figure 2. SynFace employs a specially developed real-time phoneme recognition system that delivers information about the incoming speech signal to a speech animation module, which renders the talking face on the computer screen using 3D graphics. The total delay from speech input to animation is only about 200 milliseconds, which is low enough not to disturb the flow of conversation (e.g., ). However, in order for face and voice to be perceived coherently, the acoustic signal also has to be delayed by the same amount .
The control model is based on the rule-based look-ahead model proposed by Beskow , but modified for low-latency operation. In this model, each phoneme is assigned a target vector of articulatory control parameters. To allow the targets to be influenced by coarticulation, the target vector may be under-specified, that is, some parameter values can be left undefined. If a target is left undefined, the value is inferred from context using interpolation, followed by smoothing of the resulting trajectory. As an example, consider the lip rounding parameter in a V1CV2 utterance, where V1 is an unrounded vowel, C represents a consonant cluster, and V2 is a rounded vowel. According to the rule set, lip rounding would be unspecified for the consonants, leaving these targets to be determined from the vowel context by linear interpolation from the unrounded V1, across the consonant cluster, to the rounded V2.
To allow for low-latency operation, the look-ahead model has been modified by limiting the look-ahead time window (presently a value of 100 milliseconds is used) which means that no anticipatory coarticulation beyond this window will occur.
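As a concrete illustration, the look-ahead-limited interpolation of underspecified targets could be sketched as follows. This is a minimal sketch, not the actual SynFace implementation: the function name, frame rate, and the fallback of holding the last defined value when no target lies within the window are our assumptions.

```python
def fill_targets(targets, frame_ms=10, lookahead_ms=100):
    """Fill underspecified (None) articulatory targets by linear
    interpolation, looking ahead at most `lookahead_ms`.

    `targets` holds one parameter value per frame; None means the
    target is underspecified by the rules.  Frames whose next defined
    target lies beyond the look-ahead window fall back to holding the
    previous defined value (no anticipatory coarticulation).
    """
    max_ahead = lookahead_ms // frame_ms
    out = list(targets)
    n = len(out)
    prev_i = None                       # index of last *defined* target
    for i in range(n):
        if out[i] is not None:
            prev_i = i
            continue
        # nearest defined target within the look-ahead window
        next_i = None
        for j in range(i + 1, min(n, i + 1 + max_ahead)):
            if targets[j] is not None:
                next_i = j
                break
        if prev_i is not None and next_i is not None:
            w = (i - prev_i) / (next_i - prev_i)
            out[i] = (1 - w) * targets[prev_i] + w * targets[next_i]
        elif prev_i is not None:
            out[i] = targets[prev_i]    # hold: no anticipatory info
        elif next_i is not None:
            out[i] = targets[next_i]
        else:
            out[i] = 0.0                # neutral default
    return out
```

With the 100-millisecond window of the text, an unrounded-to-rounded transition across a short consonant cluster is interpolated; a longer cluster would instead hold the unrounded value until the rounded vowel enters the window.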
For comparison, the control model has also been evaluated against several data-driven schemes . In these experiments, different models were implemented and trained to reproduce the articulatory patterns of a real speaker, based on a corpus of optical measurements. Two of the models (Cohen-Massaro and Ohman) are based on coarticulation models from speech production theory, and one uses artificial neural networks (ANNs). The different models were evaluated through a perceptual intelligibility experiment, where the data-driven models were compared against the rule-based model as well as an audio-alone condition. In order to evaluate only the control models, and not the recognition, the phonetic input to all models was generated using forced alignment (Sjolander ). Also, since the intent was a general comparison of the relative merits of the control models, that is, not only for real-time applications, no low-latency constraints were applied in this evaluation. This means that all models had access to all segments in each utterance, but in practice the models differ in their use of look-ahead information. The "Cohen-Massaro" model by design always uses all segments; the "Ohman" model looks ahead until the next upcoming vowel; while the ANN model, which was specially conceived for low-latency operation, used a constant look-ahead of 50 milliseconds.
Summary of intelligibility test (% keywords correct) of visual speech synthesis control models, from Beskow .
2.2. Phoneme Recognition
The constraints imposed on the phoneme recogniser (PR) for this application are speaker independence, task independence and low latency. However, the demands on the PR performance are limited by the fact that some phonemes map to the same visemes (targets) for synthesis.
The phoneme recogniser used in SynFace is based on a hybrid of recurrent neural networks (RNNs) and hidden Markov models (HMMs) . Mel-frequency cepstral coefficients (MFCCs) are extracted on frames of speech samples spaced 10 milliseconds apart. The neural networks are used to estimate the posterior probabilities of each phonetic class given a number of feature vectors in time . The networks are trained using backpropagation through time  with a cross-entropy error measure . This ensures an approximately linear relation between the output activities of the RNN and the posterior probabilities of each phonetic class, given the input observation. As in Strom , a mixture of time-delayed and recurrent connections is used. All the delays are positive, ensuring that no future context is used and thus reducing the total latency of the system at the cost of slightly lower recognition accuracy.
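The causal (positive-delay) property can be illustrated with a stand-in model. The sketch below uses a single Elman-style recurrent layer with a softmax output, which is an assumption for illustration; the actual SynFace network uses a mixture of time-delayed and recurrent connections and is trained with backpropagation through time.

```python
import numpy as np

def rnn_posteriors(mfcc, Wx, Wh, Wo, bh, bo):
    """Per-frame phone posteriors from a one-layer Elman-style RNN.

    Only the previous hidden state feeds each step (all delays
    positive), so no future context enters the computation and the
    output for frame t is available as soon as frame t arrives.

    mfcc: (T, d) feature frames.
    Wx: (d, H), Wh: (H, H), Wo: (H, C) weights; bh, bo biases.
    Returns a (T, C) array whose rows are softmax distributions.
    """
    T = mfcc.shape[0]
    h = np.zeros(Wh.shape[0])
    out = []
    for t in range(T):
        h = np.tanh(mfcc[t] @ Wx + h @ Wh + bh)   # causal recurrence
        z = h @ Wo + bo
        e = np.exp(z - z.max())                    # stable softmax
        out.append(e / e.sum())
    return np.array(out)
```

Training such a network with a cross-entropy loss is what licenses interpreting the outputs as class posteriors, as the text notes.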
The posterior probabilities estimated by the RNN are fed into an HMM with the main purpose of smoothing the results. The model defines a simple loop of phonemes, where each phoneme is a left-to-right three-state HMM. A slightly modified Viterbi decoder is used to allow low-latency decoding. Differently from the RNN model, the decoder makes use of some future context (look-ahead). The amount of look-ahead is one of the parameters that can be controlled in the algorithm.
During the Synface project (IST-2001-33327), the recogniser was trained and evaluated on the SpeechDat recordings  for three languages: Swedish, English and Flemish. In Salvi [20, 26], the effect of limiting the look-ahead in the Viterbi decoder was studied. No improvements in the results were observed for look-ahead lengths greater than 100 milliseconds. In the SynFace system, the look-ahead length was further limited to 30 milliseconds, resulting in a relative 4% drop in performance in terms of correct frames.
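The effect of a limited look-ahead in the decoder can be sketched as follows. For brevity this sketch uses single-state phoneme models and naively re-backtracks for every frame, rather than the incremental three-state decoder used in SynFace; it is an illustration of the principle, not the actual algorithm.

```python
import numpy as np

def truncated_viterbi(log_post, log_trans, lookahead):
    """Viterbi smoothing with limited look-ahead.

    log_post: (T, C) log posteriors (e.g. from the RNN).
    log_trans: (C, C) log transition scores of a phoneme loop.
    The label for frame t is committed by backtracking from the best
    partial path ending at frame min(t + lookahead, T - 1), so the
    decoding latency is bounded by `lookahead` frames.
    """
    T, C = log_post.shape
    delta = np.zeros((T, C))          # best partial path scores
    psi = np.zeros((T, C), dtype=int) # backpointers
    delta[0] = log_post[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # (C, C)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_post[t]
    labels = np.empty(T, dtype=int)
    for t in range(T):
        end = min(t + lookahead, T - 1)
        s = int(delta[end].argmax())
        for u in range(end, t, -1):                  # backtrack to t
            s = int(psi[u, s])
        labels[t] = s
    return labels
```

A transition penalty makes single-frame blips expensive, so short insertions in the raw frame-wise decision are smoothed away; with `lookahead` at least T - 1 the output coincides with full Viterbi decoding.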
3. Nonverbal Gestures
While enhancing speech perception through visible articulation has been the main focus of SynFace, recent work has been aimed at improving the overall communicative experience through nonarticulatory facial movements. It is well known that a large part of information transfer in face-to-face interaction is nonverbal, and it has been shown that speech intelligibility is also affected by nonverbal actions such as head movements . However, while there is a clear correlation between the speech signal and the articulatory movements of the speaker that can be exploited for driving the face articulation, it is less clear how to provide meaningful nonarticulatory movements based solely on the acoustics. We have chosen to focus on two classes of nonverbal movements that have been found to play important roles in communication and that may also be driven by acoustic features that can be reliably estimated from speech. The first category is speech-related movements linked to emphasis or prominence; the second is gestures related to interaction control in a dialogue situation. For the time being, we have not focused on expressiveness of the visual synthesis in terms of emotional content, as in Cao et al. .
Hadar et al.  found that increased head movement activity co-occurs with speech, and Beskow et al.  found, by analysing facial motion for words in focal and nonfocal position, that prominence is manifested visually in all parts of the face, and that the particular realisation chosen is dependent on the context. In particular, these results suggest that there is not one way of signalling prominence visually; rather, it is likely that several cues are used interchangeably or in combination. One issue that we are currently working on is how to reliably extract prominence based on the audio signal alone, with the goal of driving movements in the talking head. In a recent experiment (Al Moubayed et al. ) it was shown that adding small eyebrow movements on syllables with large pitch movements resulted in a significant intelligibility improvement over the articulation-only condition, but less so than a condition where manually labelled prominence was used to drive the gestures.
When people are engaged in face-to-face conversation, they take a great number of things into consideration in order to manage the flow of the interaction. We call this interaction control; the term is wider than turn-taking and does not presuppose the existence of "turns." Examples of features that play a part in interaction control include auditory cues such as pitch, intensity, pauses, disfluencies, and hyper-articulation; visual cues such as gaze, facial expressions, gestures, and mouth movements (constituting the regulators category above); and cues such as pragmatic, semantic, and syntactic completeness.
In order to investigate the effect of visual interaction control cues in a speech driven virtual talking head, we conducted an experiment with human-human interaction over a voice connection supplemented by the SynFace talking head at each end, where visual interaction control gestures were automatically controlled from the audio stream. The goal of the experiment was to find out to what degree subjects were affected by the interaction control cues. What follows is a summary; for full details, see Edlund and Beskow .
In the experiment, a bare minimum of gestures was implemented that can be said to represent a stylised version of the gaze behaviours observed by Kendon  and recent gaze-tracking experiments .
(i) A turn-taking/keeping gesture, where the avatar makes a slight turn of the head to the side in combination with shifting the gaze away a little, signalling a wish to take or keep the floor.
(ii) A turn-yielding/listening gesture, where the avatar looks straight forward, at the subject, with slightly raised eyebrows, signalling attention and willingness to listen.
(iii) A feedback/agreement gesture, consisting of a small nod. In the experiment described here, this gesture is never used alone, but is added at the end of the listening gesture to increase its responsiveness. In the following, it is simply assumed to be part of the turn-yielding/listening gesture.
The audio signal from each participant was processed by a voice activity detector (VAD). The VAD reported a change to the SPEECH state each time it detected a certain number of consecutive speech frames whilst in the SILENCE state, and vice versa. Based on these state transitions, gestures were triggered in the respective SynFace avatar.
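The VAD hysteresis described above could be sketched as a simple two-state machine. The frame threshold, the string state names, and the returned event list are illustrative choices, not the actual SynFace implementation:

```python
def vad_transitions(frame_is_speech, n_consec=5):
    """Two-state VAD with hysteresis.

    Reports a transition to SPEECH only after `n_consec` consecutive
    speech frames whilst in SILENCE, and back to SILENCE only after
    `n_consec` consecutive non-speech frames whilst in SPEECH.
    Returns a list of (frame_index, new_state) events; in the
    experiment, each such event would trigger a gesture in the avatar.
    """
    state = "SILENCE"
    run = 0          # length of the current run of "opposite" frames
    events = []
    for i, s in enumerate(frame_is_speech):
        opposite = (state == "SILENCE" and s) or (state == "SPEECH" and not s)
        run = run + 1 if opposite else 0
        if run >= n_consec:
            state = "SPEECH" if state == "SILENCE" else "SILENCE"
            events.append((i, state))
            run = 0
    return events
```

The consecutive-frame requirement keeps isolated noisy frames or very short pauses from triggering spurious gestures.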
To be able to assess the degree to which subjects were influenced by the gestures, the avatar on each side could work in one of two modes: ACTIVE or PASSIVE. In the ACTIVE mode, gestures were chosen so as to encourage one party to take and keep turns, while PASSIVE mode implied the opposite: to discourage the user from speaking. In order to collect balanced data on the two participants' behaviour, the modes were shifted regularly (every 10 turns), but they were always complementary: ACTIVE on one side and PASSIVE on the other. The number 10 was chosen to be small enough to ensure that both parties were exposed to both modes several times during the test (10 minutes), but large enough to allow subjects to accommodate to the situation.
The subjects were placed in separate rooms and equipped with head-sets connected to a Voice-over-IP call. On each side, the call was enhanced by the SynFace animated talking head representing the other participant, providing real-time lip-synchronised visual speech animation. The task was to speak freely about any topic for around ten minutes. There were 12 participants, making up 6 pairs. None of the participants had any previous knowledge of the experiment setup.
The results were analysed by counting the percentage of times that the turn changed when a speaker paused. The percentage of all utterances followed by a turn change is larger under the PASSIVE condition than under the ACTIVE condition for each participant without exception. The difference is significant ( ), which shows that subjects were consistently affected by the interaction control cues in the talking head. Post-interviews revealed that most subjects never even noticed the gestures consciously, and no subject connected them directly to interaction control. This result therefore shows that it is possible to unobtrusively influence the interaction behaviour of two interlocutors in a given direction, that is, to make a person take the floor more or less often, by way of facial gestures in an animated talking head acting as an avatar.
4. Evaluation Experiments
In the SynFace application, speech intelligibility enhancement is the main function. Speech reading and audio-visual speech intelligibility have been extensively studied by many researchers, for natural speech as well as for visual speech synthesis systems driven by text or phonetically transcribed input. Massaro et al. , for example, evaluated visual-only intelligibility of a speaker dependent speech driven system on isolated words. To date, however, we have not seen any published results on speaker independent speech driven facial animation systems, where the intelligibility enhancement (i.e., audiovisual compared to audio-only condition) has been investigated. Below, we report on two experiments where the audiovisual intelligibility of SynFace has been evaluated for different configurations and languages.
The framework adopted in SynFace allows for evaluation of the system at different points in the signal chain shown in Figure 2. We can measure accuracy
(i) at the phonetic level, by measuring the phoneme (viseme) accuracy of the speech recogniser,
(ii) at the face parameter level, by computing the distance between the face parameters generated by the system and the optimal trajectories, for example, trajectories obtained from phonetically annotated speech,
(iii) at the intelligibility level, by performing listening tests with hearing impaired subjects, or with normal hearing subjects and a degraded acoustic signal.
The advantage of the first two methods is simplicity. The computations can be performed automatically, if we assume that a good reference is available (phonetically annotated speech). The third method, however, is the most reliable because it tests the effects of the system as a whole.
Evaluating the Phoneme Recogniser
Measuring the performance at the phonetic level can be done in at least two ways: by measuring the percentage of frames that are correctly classified, or by computing the Levenshtein (edit) distance (Levenshtein ) between the string of phonemes output by the recogniser and the reference transcription. The first method does not explicitly consider the stability of the results in time and may, therefore, overestimate the performance of a recogniser that produces many short insertions. These insertions, however, do not necessarily degrade the face parameter trajectories, because the articulatory model on which the face parameter generation is based often acts as a low-pass filter. On the other hand, the Levenshtein distance does not consider the time alignment of the two sequences, and may give misleading results when two phonetic subsequences that do not co-occur in time are aligned by mistake. To make the latter measure comparable with the correct frames %, we express it in terms of accuracy, defined as 100 (N - L) / N %, where L is the Levenshtein (edit) distance and N the length of the reference transcription.
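The accuracy measure above can be computed directly from a standard dynamic-programming edit distance; a minimal sketch (the function names are ours):

```python
def levenshtein(hyp, ref):
    """Edit distance between two phoneme sequences
    (minimum number of insertions, deletions, substitutions)."""
    prev = list(range(len(ref) + 1))
    for i, x in enumerate(hyp, 1):
        cur = [i]
        for j, y in enumerate(ref, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution
        prev = cur
    return prev[-1]

def accuracy(hyp, ref):
    """Accuracy in %, comparable with correct frames %:
    100 * (N - L) / N, with L the edit distance and N the
    length of the reference transcription."""
    return 100.0 * (1.0 - levenshtein(hyp, ref) / len(ref))
```

Note that, unlike correct frames %, this accuracy can go negative when the recogniser produces many insertions, which is precisely the behaviour the measure is meant to penalise.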
Evaluating intelligibility is performed through listening tests with a number of hearing impaired or normal hearing subjects. Using normal hearing subjects and distorting the audio signal has been shown to be a viable simulation of perception by the hearing impaired [35, 36]. The speech material is presented to the subjects in different conditions. These may include audio alone, audio and natural face, and audio and synthetic face. In the last case, the synthetic face may be driven by different methods (e.g., different versions of the PR that we want to compare). It may also be driven by carefully obtained annotations of the speech material, if the aim is to test the effects of the visual synthesis models alone.
Two listening test methods have been used in the current experiments. The first method is based on a set of carefully designed short sentences containing a number of key-words. The subject's task is to repeat the sentences, and intelligibility is measured in terms of correctly recognised key-words. In the case of normal hearing subjects, the acoustic signal may be degraded by noise in order to simulate hearing impairment. In the following, we will refer to this methodology as the "key-word" test.
The second methodology (Hagerman and Kinnefors ) relies on the adaptive use of noise to assess the level of intelligibility. Lists of 5 words are presented to the subjects in varying noise conditions. The signal-to-noise ratio (SNR dB) is adjusted during the test until the subject is able to correctly report about 50% of the words. This level of noise is referred to as the Speech Reception Threshold (SRT dB) and indicates the amount of noise the subject is able to tolerate before intelligibility drops below 50%. Lower values of SRT correspond to better performance (the intelligibility is more robust to noise). We will refer to this methodology as the "SRT" test.
SRT Versus Correct Frames %
4.1. SynFace in German
Number of connections in the RNN and correct frames % of the SynFace RNN phonetic classifiers.
The same synthesis rules used for Swedish are applied to the German system, simply by mapping the phoneme (viseme) inventory of the two languages.
To evaluate the German version of the SynFace system, a small "key-word" intelligibility test was performed. A set of twenty short (4-6 words) sentences from the Göttinger Satztest set , spoken by a male native German speaker, was presented to a group of six normal hearing German listeners. The audio presented to the subjects was degraded in order to avoid ceiling effects, using a 3-channel noise-excited vocoder (Shannon et al. ). This type of signal degradation has been used in previous audio-visual intelligibility experiments (Siciliano et al. ) and can be viewed as a way of simulating the information reduction experienced by cochlear implant patients. Clean speech was used to drive SynFace. Ten sentences were presented audio-only and ten with SynFace support. Subjects were presented with four training sentences before the test started. The listeners were instructed to watch the screen and write down what they perceived.
4.2. Narrow- Versus Wide-Band PR
In the Hearing at Home project, SynFace is employed in a range of applications that include speech signals that are streamed through different media (Telephone, Internet, TV). The signal is often of a higher quality compared to the land-line telephone settings. This opens the possibility for improvements in the signal processing part of the system.
In order to take advantage of the available audio band in these applications, the SynFace recogniser was trained on wide-band speech data from the SpeeCon corpus . SpeeCon contains recordings in several languages and conditions; only Swedish recordings made in office settings were chosen. The corpus contains word-level transcriptions, and annotations for speaker noise, background noise, and filled pauses. As in the SpeechDat training, the silence at the boundaries of every utterance was reduced in order to improve the balance between the number of frames for the silence class and for any other phonetic class. Differently from the SpeechDat training, NALIGN (Sjolander ) was used to create time-aligned phonetic transcriptions of the corpus based on the orthographic transcriptions.
The bank of filters used to compute the MFCCs that are input to the recogniser was defined in such a way that the filters between 0 and 4 kHz coincide with the narrow-band filter-bank definition. Additional filters are added for the frequencies above 4 kHz offered by the wide-band signal.
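The idea of extending a narrow-band mel filter bank upwards without disturbing the existing filters can be sketched as follows. The filter counts (24 narrow-band, 6 extra) and the use of the standard mel formula are illustrative assumptions; the actual SynFace filter-bank parameters are not specified here.

```python
import math

def mel(f):
    """Hz to mel (standard O'Shaughnessy formula)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_inv(m):
    """Mel back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def filter_centres(n_narrow=24, n_extra=6):
    """Centre frequencies of a wide-band filter bank whose first
    n_narrow filters coincide with the narrow-band (0-4 kHz) bank.
    The n_extra additional filters are mel-spaced over 4-8 kHz, so
    narrow-band-trained features stay comparable below 4 kHz.
    """
    lo, hi = mel(0.0), mel(4000.0)
    narrow = [mel_inv(lo + (hi - lo) * i / (n_narrow + 1))
              for i in range(1, n_narrow + 1)]
    hi2 = mel(8000.0)
    extra = [mel_inv(hi + (hi2 - hi) * i / (n_extra + 1))
             for i in range(1, n_extra + 1)]
    return narrow + extra
```

Keeping the lower filters identical means a wide-band network sees, for the 0-4 kHz region, the same spectral representation as the narrow-band one, which makes the comparison in the next paragraph meaningful.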
Comparison between the SpeechDat telephone quality (TF), SpeeCon narrow-band (NB), and SpeeCon wide-band (WB) recognisers. Results are given in terms of training data size (ca. hours), correct frames % for phonemes (ph) and visemes (vi), and phoneme accuracy.
In order to have a more controlled comparison between the narrow- and wide-band networks for Swedish, a network was trained on a downsampled (8 kHz) version of the same SpeeCon database. The middle column of Table 3 shows the results for the networks trained and tested on the narrow-band (downsampled) version of the SpeeCon database. Results are shown in terms of correct frames % for phonemes and visemes, and phoneme accuracy.
SynFace is currently available in Swedish, German, Flemish and English. In order to investigate the possibility of using the current recognition models on new languages, we performed cross-language evaluation tests.
Correct frames % for different languages (columns) recognised by different models (rows). The languages are German (de), English (en), Flemish (fl), and Swedish (sv). Numbers in parentheses are the % of correct frames for perfect recognition, given the mismatch in phonetic inventory across languages.
The second mapping criterion, depicted in Figure 6(b), considers as correct the association between model and target phonemes that was most frequently adopted by the recognition models on that particular target language. If we consider all possible maps between the set of model phonemes and the set of target phonemes, this corresponds to an upper bound on the results. Compared to the results in Table 4, this evaluation method gives about 10% more correct frames on average. In this case, however, there is no guarantee that the chosen mapping bears phonetic significance.
The purpose of SynFace is to enhance spoken communication for the hearing impaired, rather than solving the acoustic-to-visual speech mapping per se. The methods employed here are, therefore, tailored to achieving this goal in the most effective way. Beskow  showed that, whereas data-driven visual synthesis resulted in more realistic lip movements, the rule-based system enhanced the intelligibility. Similarly, mapping from the acoustic speech directly into visual parameters is an appealing research problem. However, when the ambition is to develop a tool that can be applied in real-life conditions, it is necessary to constrain the problem. The system discussed in this paper
(i) works in real time and with low latency, allowing realistic conditions for natural spoken communication,
(ii) is light-weight and can be run on standard commercially available hardware,
(iii) is speaker independent, allowing the user to communicate with any person,
(iv) is being developed for different languages (currently, Swedish, English, Flemish, and German are available),
(v) is optimised for different acoustic conditions, ranging from telephone speech quality to the wide-band speech available in, for example, Internet communications and radio/TV broadcasting,
(vi) is being extensively evaluated in realistic settings, with hearing impaired subjects or by simulating hearing impairment.
Even though speech intelligibility is the focus of the SynFace system, extra-linguistic aspects of speech communication have also been described in the paper. Modelling nonverbal gestures proved to be a viable way of enhancing the turn-taking mechanism in telephone communication.
Future work will be aimed at increasing the generality of the methods, for example, by studying ways to achieve language independence or by simplifying the process of optimising the system to a new language, based on the preliminary results shown in this paper. Reliably extracting extra-linguistic information, as well as synthesis and evaluation of nonverbal gestures will also be the focus of future work.
The work presented here was funded in part by European Commission Project IST-045089 (Hearing at Home) and Swedish Research Council Project 621-2005-3488 (Modelling multimodal communicative signal and expressive speech for embodied conversational agents).
- Tamura M, Masuko T, Kobayashi T, Tokuda K: Visual speech synthesis based on parameter generation from hmm: speech-driven and text-and-speechdriven approaches. Proceedings of the Auditory-Visual Speech Processing (AVSP '98), 1998 221-226.Google Scholar
- Nakamura S, Yamamoto E: Speech-to-lip movement synthesis by maximizing audio-visual joint probability based on the EM algorithm. Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology 2001,27(1-2):119-126.View ArticleMATHGoogle Scholar
- Wen Z, Hong P, Huang T: Real time speech driven facial animation using formant analysis. Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '01), 2001 817-820.Google Scholar
- Al Moubayed S, De Smet M, Van Hamme H: Lip synchronization:from phone lattice to PCA eigenprojections using neural networks. Proceedings of the Biennial Conference of the International Speech Communication Association (Interspeech '08), 2008, Brisbane, AustraliaGoogle Scholar
- Hofer G, Yamagishi J, Shimodaira H: Speech-driven lip motion generation with a trajectory hmm. Proceedings of the Biennial Conference of the International Speech Communication Association (Interspeech '08), 2008, Brisbane, AustraliaGoogle Scholar
- Ohman T, Salvi G: Using HMMs and ANNs for mapping acoustic to visual speech. TMH-QPSR 1999,40(1-2):45-50.Google Scholar
- Massaro D, Beskow J, Cohen M, Fry C, Rodgriguez T: Picture my voice: audio to visual speech synthesis using artificial neural networks. Proceedings of the International Conference on Auditory-Visual Speech Processing (ISCA '99), 1999Google Scholar
- Ezzat T, Geiger G, Poggio T: Trainable videorealistic speech animation. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, 2002, New York, NY, USA. ACM; 388-398.Google Scholar
- Liu K, Ostermann J: Realistic facial animation system for interactive services. Proceedings of the Biennial Conference of the International Speech Communication Association (Interspeech '08), 2008, Brisbane, AustraliaGoogle Scholar
- Cohen M, Massaro D: Models and techniques in computer animation. In Modeling Coarticulation in Synthetic Visual Speech. Volume 92. Springer, Tokyo, Japan; 1993.Google Scholar
- Železný M, Krňoul Z, Císař P, Matoušek J: Design, implementation and evaluation of the Czech realistic audio-visual speech synthesis. Signal Processing 2006,86(12):3657-3673. 10.1016/j.sigpro.2006.02.039
- Theobald BJ, Bangham JA, Matthews IA, Cawley GG: Near-videorealistic synthetic talking faces: implementation and evaluation. Speech Communication 2004,44(1–4):127-140.
- Kitawaki N, Itoh K: Pure delay effects on speech quality in telecommunications. IEEE Journal on Selected Areas in Communications 1991,9(4):586-593. 10.1109/49.81952
- McGrath M, Summerfield Q: Intermodal timing relations and audio-visual speech recognition by normal-hearing adults. Journal of the Acoustical Society of America 1985,77(2):678-685. 10.1121/1.392336
- Parke FI: Parameterized models for facial animation. IEEE Computer Graphics and Applications 1982,2(9):61-68.
- Beskow J: Rule-based visual speech synthesis. Proceedings of the European Conference on Speech Communication and Technology (Eurospeech '95), 1995, Madrid, Spain, 299-302.
- Gjermani T: Integration of an animated talking face model in a portable device for multimodal speech synthesis, M.S. thesis. Department for Speech, Music and Hearing, KTH, School of Computer Science and Communication, Stockholm, Sweden; 2008.
- Beskow J: Trainable articulatory control models for visual speech synthesis. International Journal of Speech Technology 2004,7(4):335-349.
- Sjölander K: An HMM-based system for automatic segmentation and alignment of speech. Proceedings of Fonetik, 2003, Umeå, Sweden, 93-96.
- Salvi G: Truncation error and dynamics in very low latency phonetic recognition. Proceedings of Non-Linear Speech Processing (NOLISP '03), 2003, Le Croisic, France.
- Robinson AJ: Application of recurrent nets to phone probability estimation. IEEE Transactions on Neural Networks 1994,5(2):298-305. 10.1109/72.279192
- Werbos PJ: Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 1990,78(10):1550-1560. 10.1109/5.58337
- Bourlard H, Morgan N: Continuous speech recognition by connectionist statistical methods. IEEE Transactions on Neural Networks 1993,4(6):893-909. 10.1109/72.286885
- Ström N: Development of a recurrent time-delay neural net speech recognition system. TMH-QPSR 1992,26(4):1-15.
- Elenius K: Experiences from collecting two Swedish telephone speech databases. International Journal of Speech Technology 2000,3(2):119-127. 10.1023/A:1009641213324
- Salvi G: Dynamic behaviour of connectionist speech recognition with strong latency constraints. Speech Communication 2006,48(7):802-818. 10.1016/j.specom.2005.05.005
- Munhall KG, Jones JA, Callan DE, Kuratate T, Vatikiotis-Bateson E: Visual prosody and speech intelligibility: head movement improves auditory speech perception. Psychological Science 2004,15(2):133-137. 10.1111/j.0963-7214.2004.01502010.x
- Cao Y, Tien WC, Faloutsos P, Pighin F: Expressive speech-driven facial animation. ACM Transactions on Graphics 2005,24(4):1283-1302. 10.1145/1095878.1095881
- Hadar U, Steiner TJ, Grant EC, Rose FC: Kinematics of head movements accompanying speech during conversation. Human Movement Science 1983,2(1-2):35-46. 10.1016/0167-9457(83)90004-0
- Beskow J, Granström B, House D: Analysis and synthesis of multimodal verbal and non-verbal interaction for animated interface agents. Proceedings of the International Workshop on Verbal and Nonverbal Communication Behaviours, 2007, Lecture Notes in Computer Science 4775: 250-263.
- Edlund J, Beskow J: Pushy versus meek: using avatars to influence turn-taking behaviour. Proceedings of the Biennial Conference of the International Speech Communication Association (Interspeech '07), 2007, Antwerp, Belgium.
- Kendon A: Some functions of gaze-direction in social interaction. Acta Psychologica 1967,26(1):22-63.
- Hugot V: Eye gaze analysis in human-human communication, M.S. thesis. Department for Speech, Music and Hearing, KTH, School of Computer Science and Communication, Stockholm, Sweden; 2007.
- Levenshtein VI: Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics Doklady 1966, 10: 707.
- Humes LE, Espinoza-Varas B, Watson CS: Modeling sensorineural hearing loss. I. Model and retrospective evaluation. Journal of the Acoustical Society of America 1988,83(1):188-202. 10.1121/1.396420
- Humes LE, Jesteadt W: Models of the effects of threshold on loudness growth and summation. Journal of the Acoustical Society of America 1991,90(4):1933-1943. 10.1121/1.401673
- Hagerman B, Kinnefors C: Efficient adaptive methods for measuring speech reception threshold in quiet and in noise. Scandinavian Audiology 1995,24(1):71-77. 10.3109/01050399509042213
- Lindberg B, Johansen FT, Warakagoda N, et al.: A noise robust multilingual reference recogniser based on SpeechDat(II). Proceedings of the International Conference on Spoken Language Processing (ICSLP '00), 2000.
- Wesselkamp M: Messung und Modellierung der Verständlichkeit von Sprache [Measurement and modeling of speech intelligibility], Ph.D. thesis. Universität Göttingen; 1994.
- Shannon RV, Zeng F-G, Kamath V, Wygonski J, Ekelid M: Speech recognition with primarily temporal cues. Science 1995,270(5234):303-304. 10.1126/science.270.5234.303
- Siciliano C, Williams G, Beskow J, Faulkner A: Evaluation of a multilingual synthetic talking face as a communication aid for the hearing impaired. Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS '03), 2003, Barcelona, Spain, 131-134.
- Iskra D, Grosskopf B, Marasek K, van den Heuvel H, Diehl F, Kiessling A: Speecon—speech databases for consumer devices: database specification and validation. Proceedings of the International Conference on Language Resources and Evaluation (LREC '02), 2002, 329-333.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.