
  • Research Article
  • Open Access

Detection and Separation of Speech Events in Meeting Recordings Using a Microphone Array

EURASIP Journal on Audio, Speech, and Music Processing 2007, 2007:027616

  • Received: 2 November 2006
  • Accepted: 19 April 2007


Abstract

When automatic speech recognition (ASR) is applied to meeting recordings containing spontaneous speech, recognition performance is severely degraded by overlapping speech events. This paper proposes a method of separating overlapping speech events using an adaptive beamforming (ABF) framework. The main feature of the method is that all the information necessary for the adaptation of the ABF, including microphone calibration, is obtained from the meeting recordings themselves, based on the results of speech-event detection. The separation performance is evaluated via ASR on real meeting recordings.
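The separation stage rests on adaptive beamforming: array weights are adapted so that the target speaker passes undistorted while interference from other directions is suppressed. As a rough illustration of the underlying principle only (not the authors' specific ABF framework, whose adaptation data come from detected speech events), the following is a minimal narrowband MVDR beamformer sketch in Python/NumPy; the array geometry, directions, and noise levels are invented for the example.

```python
import numpy as np

def mvdr_weights(steering, noise_cov):
    """MVDR weights: w = R^-1 d / (d^H R^-1 d), distortionless toward d."""
    r_inv_d = np.linalg.solve(noise_cov, steering)
    return r_inv_d / (steering.conj() @ r_inv_d)

rng = np.random.default_rng(0)
m = 4  # microphones in a hypothetical uniform linear array (half-wavelength spacing)
# Steering vectors for the target and an interfering talker (directions are arbitrary)
d = np.exp(-1j * np.pi * np.arange(m) * np.sin(0.3))
d_int = np.exp(-1j * np.pi * np.arange(m) * np.sin(-0.8))

# Simulate a target-absent segment: interference plus sensor noise
n_snap = 2000
s_int = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
x_noise = np.outer(d_int, s_int) + noise

# Sample covariance estimated from that segment, as in adaptation from recordings
R = x_noise @ x_noise.conj().T / n_snap

w = mvdr_weights(d, R)
print(abs(w.conj() @ d))      # gain toward target: ≈ 1.0 (distortionless constraint)
print(abs(w.conj() @ d_int))  # gain toward interferer: strongly attenuated
```

In the paper's setting, the covariance estimate and steering information are derived from the meeting recording itself using the speech-event detection results, rather than from a known geometry as in this toy example.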


Keywords

  • Acoustics
  • Speech Recognition
  • Automatic Speech Recognition
  • Spontaneous Speech
  • Microphone Array


Authors’ Affiliations

Information Technology Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba 305-8568, Japan
Advanced Media, Inc., 48F Sunshine 60 Building, 3-1-1 Higashi-Ikebukuro, Toshima-Ku, Tokyo 170-6048, Japan




© Futoshi Asano et al. 2007

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.