Abstract
Virtual characters are an increasingly common presence in daily life, yet resources and tools for visual speech technologies remain scarce for minority languages. In this paper we present an application that animates virtual characters in real time from live speech in Basque. For a realistic facial animation, the lips must be synchronized with the audio. To accomplish this, we compared different methods of obtaining the final visemes through HMM-based speech recognition techniques. Finally, the implementation of a working prototype has demonstrated that quite natural real-time animation is feasible with a minimal amount of training data.
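The pipeline the abstract describes — recognizing phonemes from live audio and converting them to visemes that drive the character's lips — can be sketched at its last stage as a phoneme-to-viseme lookup. The phoneme labels and viseme classes below are illustrative assumptions, not the paper's actual Basque inventory or mapping.

```python
# Hypothetical sketch of the phoneme-to-viseme stage of a speech-driven
# animation pipeline. The table below is illustrative only; the paper's
# Basque phoneme set and viseme classes are not reproduced here.

PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",  # lips closed
    "f": "labiodental",
    "a": "open", "e": "mid",
    "o": "rounded", "u": "rounded",
    "s": "alveolar", "t": "alveolar",
    "sil": "neutral",  # silence keeps the mouth in a neutral pose
}

def phonemes_to_visemes(phonemes):
    """Map a recognized phoneme sequence to viseme targets, collapsing
    consecutive duplicates so the animation engine only receives
    keyframe changes rather than one target per phoneme."""
    visemes = []
    for ph in phonemes:
        v = PHONEME_TO_VISEME.get(ph, "neutral")
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes

print(phonemes_to_visemes(["sil", "b", "a", "t", "sil"]))
# → ['neutral', 'bilabial', 'open', 'alveolar', 'neutral']
```

In a real-time system such as the one described, the phoneme sequence would arrive incrementally from the recognizer (e.g. HTK/ATK), so the collapsing step matters: it avoids re-triggering the same mouth shape on consecutive phonemes that share a viseme class.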
© 2006 Springer-Verlag Berlin Heidelberg
Lehr, M., Arruti, A., Ortiz, A., Oyarzun, D., Obach, M. (2006). Speech Driven Facial Animation Using HMMs in Basque. In: Sojka, P., Kopeček, I., Pala, K. (eds) Text, Speech and Dialogue. TSD 2006. Lecture Notes in Computer Science(), vol 4188. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11846406_52
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-39090-9
Online ISBN: 978-3-540-39091-6