Detection and Separation of Speech Events in Meeting Recordings Using a Microphone Array

Abstract

When automatic speech recognition (ASR) is applied to meeting recordings that include spontaneous speech, overlapping speech events greatly degrade recognition performance. In this paper, a method of separating overlapping speech events using an adaptive beamforming (ABF) framework is proposed. The main feature of this method is that all the information necessary for adapting the ABF, including microphone calibration, is obtained from the meeting recording itself, based on the results of speech-event detection. The separation performance is evaluated via ASR on real meeting recordings.
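The core idea above, adapting a beamformer using only statistics gathered from the detected speech events themselves, can be illustrated with a minimal narrowband MVDR beamformer sketch in NumPy. This is a generic illustration, not the paper's actual algorithm: the covariance matrix `R` is assumed to be estimated from frames where only the interfering talker is active (mirroring the detection-driven adaptation described in the abstract), and the steering vectors and all numeric values are hypothetical.

```python
import numpy as np

def mvdr_weights(R, d, loading=1e-3):
    """MVDR beamformer weights w = R^{-1} d / (d^H R^{-1} d).

    R : (M, M) spatial covariance of interference plus noise, assumed
        estimated from frames where only the competing speech event is
        active (detection-driven adaptation).
    d : (M,) steering vector toward the target speech event.
    A small diagonal loading term keeps the inversion well conditioned.
    """
    M = R.shape[0]
    R = R + loading * (np.trace(R).real / M) * np.eye(M)
    Rinv_d = np.linalg.solve(R, d)
    # Normalize so the target direction is passed with unit gain.
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy example: 4-microphone array, a single frequency bin,
# with made-up steering vectors for two talkers.
M = 4
d_target = np.exp(-1j * 2 * np.pi * np.arange(M) * 0.1)
d_interf = np.exp(-1j * 2 * np.pi * np.arange(M) * 0.3)

# Interference-plus-noise covariance from the "interferer-only" frames.
R = np.outer(d_interf, d_interf.conj()) + 0.01 * np.eye(M)

w = mvdr_weights(R, d_target)
print(abs(w.conj() @ d_target))  # unit gain toward the target (≈ 1)
print(abs(w.conj() @ d_interf))  # the interfering talker is attenuated
```

The distortionless constraint w^H d = 1 holds by construction, so the target event is preserved while energy from the overlapping event is suppressed; the detection stage decides which frames feed the covariance estimate.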

[1–18]

References

  1. Moore DC, McCowan IA: Microphone array speech recognition: experiments on overlapping speech in meetings. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), April 2003, Hong Kong, 5: 497-500.

  2. Dielmann A, Renals S: Dynamic Bayesian networks for meeting structuring. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), May 2004, Montreal, Quebec, Canada, 5: 629-632.

  3. Ajmera J, Lathoud G, McCowan I: Clustering and segmenting speakers and their locations in meetings. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), May 2004, Montreal, Quebec, Canada, 1: 605-608.

  4. Katoh M, Yamamoto K, Ogata J, et al.: State estimation of meetings by information fusion using Bayesian network. Proceedings of the 9th European Conference on Speech Communication and Technology, September 2005, Lisbon, Portugal, 113-116.

  5. Hain T, Dines J, Garau G, et al.: Transcription of conference room meetings: an investigation. Proceedings of the 9th European Conference on Speech Communication and Technology (EUROSPEECH '05), September 2005, Lisbon, Portugal, 1661-1664.

  6. Haykin S (Ed.): Unsupervised Adaptive Filtering, Vol. 1. John Wiley & Sons, New York, NY, USA; 2000.

  7. Johnson DH, Dudgeon DE: Array Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, USA; 1993.

  8. Hoshuyama O, Sugiyama A, Hirano A: A robust adaptive beamformer for microphone arrays with a blocking matrix using constrained adaptive filters. IEEE Transactions on Signal Processing 1999, 47(10): 2677-2684. doi:10.1109/78.790650

  9. Oak P, Kellermann W: A calibration method for robust generalized sidelobe cancelling beamformers. Proceedings of the International Workshop on Acoustic Echo and Noise Control (IWAENC '05), September 2005, Eindhoven, The Netherlands, 97-100.

  10. Gannot S, Cohen I: Speech enhancement based on the general transfer function GSC and postfiltering. IEEE Transactions on Speech and Audio Processing 2004, 12(6): 561-571. doi:10.1109/TSA.2004.834599

  11. Asano F, Hayamizu S, Yamada T, Nakamura S: Speech enhancement based on the subspace method. IEEE Transactions on Speech and Audio Processing 2000, 8(5): 497-507. doi:10.1109/89.861364

  12. Asano F, Yamamoto K, Hara I, et al.: Detection and separation of speech event using audio and video information fusion and its application to robust speech interface. EURASIP Journal on Applied Signal Processing 2004, 2004(11): 1727-1738. doi:10.1155/S1110865704402303

  13. Asano F, Ogata J: Detection and separation of speech events in meeting recordings. Proceedings of the 9th International Conference on Spoken Language Processing (ICSLP '06), September 2006, Pittsburgh, PA, USA, 2586-2589.

  14. Asano F, Ikeda S, Ogawa M, Asoh H, Kitawaki N: Combined approach of array processing and independent component analysis for blind separation of acoustic signals. IEEE Transactions on Speech and Audio Processing 2003, 11(3): 204-215. doi:10.1109/TSA.2003.809191

  15. Schmidt RO: Multiple emitter location and signal parameter estimation. IEEE Transactions on Antennas and Propagation 1986, 34(3): 276-280. doi:10.1109/TAP.1986.1143830

  16. Suzuki Y, Asano F, Kim H-Y, Sone T: An optimum computer-generated pulse signal suitable for the measurement of very long impulse responses. Journal of the Acoustical Society of America 1995, 97(2): 1119-1123. doi:10.1121/1.412224

  17. Leggetter CJ, Woodland PC: Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models. Computer Speech and Language 1995, 9(2): 171-185. doi:10.1006/csla.1995.0010

  18. Gauvain J-L, Lee C-H: Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains. IEEE Transactions on Speech and Audio Processing 1994, 2(2): 291-298. doi:10.1109/89.279278

Author information

Corresponding author

Correspondence to Futoshi Asano.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 Generic License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Asano, F., Yamamoto, K., Ogata, J. et al. Detection and Separation of Speech Events in Meeting Recordings Using a Microphone Array. J AUDIO SPEECH MUSIC PROC. 2007, 027616 (2007). https://doi.org/10.1155/2007/27616

Keywords

  • Acoustics
  • Speech Recognition
  • Automatic Speech Recognition
  • Spontaneous Speech
  • Microphone Array