Abstract
The extraction of information from recorded meetings is an important yet challenging task. The problem lies in the fact that speech recognition systems cannot be applied directly to meeting speech data, mainly because meeting participants speak concurrently and head-mounted microphones record more than just their wearers' utterances: crosstalk from neighbouring participants is inevitably recorded as well. As a result, a degree of preprocessing of these recordings is needed. The current work presents an approach to segmenting meetings into four audio classes: single speaker, crosstalk, single speaker plus crosstalk, and silence. For this purpose, we propose Two-Layer Cascaded Subband Filters, which are spread according to the pitch and formant frequency scales. These filters are able to detect the presence or absence of pitch and formants in an audio signal. In addition, they can determine how many pitches and formants are present in an audio signal based on the output subband energies. Experiments conducted on the ICSI meeting corpus show that, although overall recognition rates reach only 57%, rates for the crosstalk and silence classes are as high as 80%. This indicates the positive effect and potential of this subband feature for meeting segmentation tasks.
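To make the idea concrete, a two-layer cascaded subband filter bank can be sketched as follows: a first layer of band-pass filters placed on a formant-like frequency scale, each of whose outputs is split again by a second layer of narrower subbands so that pitch-related harmonic structure shows up in the per-subband energies. The sketch below is a minimal illustration of this general scheme; all band edges, the filter order, the number of layer-2 subdivisions, and the use of Butterworth filters are assumptions for illustration, not the paper's exact filter design.

```python
# Illustrative two-layer cascaded subband energy features (assumed design).
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000  # assumed sampling rate (Hz)

# Layer 1: wide bands roughly covering the F1/F2/F3 formant regions (assumed).
LAYER1_BANDS = [(300.0, 900.0), (900.0, 2500.0), (2500.0, 3500.0)]
N_SUBBANDS = 4  # layer-2 subdivisions per layer-1 band (assumed)

def bandpass(low, high, fs=FS, order=4):
    """Design a Butterworth band-pass filter as second-order sections."""
    return butter(order, [low, high], btype="bandpass", fs=fs, output="sos")

def cascaded_subband_energies(frame):
    """Log energies of the layer-2 outputs for one short audio frame."""
    feats = []
    for lo, hi in LAYER1_BANDS:
        wide = sosfilt(bandpass(lo, hi), frame)           # layer 1
        edges = np.linspace(lo, hi, N_SUBBANDS + 1)
        for k in range(N_SUBBANDS):                       # layer 2
            narrow = sosfilt(bandpass(edges[k], edges[k + 1]), wide)
            feats.append(np.log(np.sum(narrow ** 2) + 1e-12))
    return np.asarray(feats)  # one log-energy value per layer-2 subband
```

With features of this kind computed frame by frame, a frame-level classifier (e.g., one model per class) can then assign each frame to one of the four audio classes: single speaker, crosstalk, single speaker plus crosstalk, or silence.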
References
Dielmann, A., Renals, S.: Multistream Dynamic Bayesian Network for Meeting Segmentation. In: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2006), Toulouse, France, May 14-19 (2006)
Wrigley, S.N., Brown, G.J., Wan, V., Renals, S.: Speech and Crosstalk Detection in Multichannel Audio. IEEE Transactions on Speech and Audio Processing 13(1) (January 2005)
Janin, A., Baron, D., Edwards, J., Ellis, D., Gelbart, D., Morgan, N., Peskin, B., Pfau, T., Shriberg, E., Stolcke, A., Wooters, C.: The ICSI Meeting Corpus. In: Proc. ICASSP, pp. 364–367 (2003)
Wang, X., Pols, L.C.W., ten Bosch, L.F.M.: Analysis of Context-Dependent Segmental Duration for Automatic Speech Recognition. In: International Conference on Spoken Language Processing (ICSLP), pp. 1181–1184 (1996)
Klatt, D.H.: Software for a Cascade/Parallel Formant Synthesizer. J. Acoust. Soc. Am. 67, 971–995 (1980)
Li, H., Nwe, T.L.: Vibrato-Motivated Acoustic Features for Singer Identification. In: Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, Toulouse, France, May 14-19 (2006)
Rabiner, L.R., Juang, B.H.: Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs (1993)
Fant, G.: Speech Sounds and Features. MIT Press, Cambridge (1973)
Becchetti, C., Ricotti, L.P.: Speech Recognition Theory and C++ Implementation. John Wiley & Sons, New York (1998)
Cite this paper
Giuliani, M., Nwe, T.L., Li, H. (2006). Meeting Segmentation Using Two-Layer Cascaded Subband Filters. In: Huo, Q., Ma, B., Chng, E.S., Li, H. (eds.) Chinese Spoken Language Processing. ISCSLP 2006. Lecture Notes in Computer Science, vol. 4274. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11939993_68