Open Access

Significance of Joint Features Derived from the Modified Group Delay Function in Speech Processing

EURASIP Journal on Audio, Speech, and Music Processing 2007, 2007:079032

DOI: 10.1155/2007/79032

Received: 1 April 2006

Accepted: 10 October 2006

Published: 20 December 2006

Abstract

This paper investigates the significance of combining cepstral features derived from the modified group delay function with those derived from the short-time spectral magnitude, such as the mel-frequency cepstral coefficients (MFCC). The conventional group delay function fails to capture the resonant structure and the dynamic range of the speech spectrum, primarily because pitch periodicity effects introduce spiky artifacts. The group delay function is therefore modified to suppress these spikes and to restore the dynamic range of the speech spectrum. Cepstral features derived from the modified group delay function are called the modified group delay feature (MODGDF). The complementarity and robustness of the MODGDF relative to the MFCC are analyzed using spectral reconstruction techniques. The combination of several spectral magnitude-based features with the MODGDF, using both feature fusion and likelihood combination, is then described. These joint features are used for three speech processing tasks, namely syllable, speaker, and language recognition. Results indicate that combining the MODGDF with the MFCC at the feature level gives significant improvements for speech recognition tasks in noise. Combining the MODGDF with spectral magnitude-based features yields a significant increase in recognition performance of up to 11%, whereas combining any two features derived from the spectral magnitude gives no significant improvement.
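
In outline, the MODGDF computation proceeds as follows. For a windowed frame x(n) with spectrum X(ω), the group delay function can be computed as τ(ω) = (X_R(ω)Y_R(ω) + X_I(ω)Y_I(ω)) / |X(ω)|², where Y(ω) is the spectrum of n·x(n). The modification replaces |X(ω)|² with a cepstrally smoothed spectrum S(ω)^(2γ) to suppress pitch-induced spikes, and compresses the dynamic range as τ_m(ω) = sign(τ(ω)) |τ(ω)|^α; the DCT of τ_m(ω) then yields the cepstral features. The following is a minimal Python sketch of this pipeline; the function name and the parameter values (alpha, gamma, the lifter length, and the number of coefficients) are illustrative assumptions, not the settings tuned in the paper.

    import numpy as np
    from scipy.fftpack import dct

    def modgdf(frame, nfft=512, alpha=0.4, gamma=0.9, lifter=8, n_ceps=13):
        """Modified group delay cepstral features for one windowed frame.
        Parameter defaults are illustrative, not the paper's settings."""
        n = np.arange(len(frame))
        X = np.fft.rfft(frame, nfft)          # spectrum of x(n)
        Y = np.fft.rfft(n * frame, nfft)      # spectrum of n * x(n)

        # Cepstrally smoothed magnitude spectrum S(w): keep only the
        # low-quefrency cepstral coefficients of log|X| and transform back.
        log_mag = np.log(np.abs(X) + 1e-10)
        c = np.fft.irfft(log_mag, nfft)
        c[lifter:nfft - lifter] = 0.0         # lifter the cepstrum
        S = np.exp(np.fft.rfft(c, nfft).real)

        # Modified group delay: S(w)^(2*gamma) in the denominator suppresses
        # pitch spikes; the |.|^alpha compression restores the dynamic range.
        tau = (X.real * Y.real + X.imag * Y.imag) / (S ** (2 * gamma) + 1e-10)
        tau_m = np.sign(tau) * np.abs(tau) ** alpha

        # Cepstral conversion via the DCT, analogous to the MFCC pipeline.
        return dct(tau_m, type=2, norm='ortho')[:n_ceps]

Applied to, say, 25 ms Hamming-windowed frames taken every 10 ms, this produces one MODGDF vector per frame; feature-level fusion then amounts to concatenating each such vector with the corresponding MFCC vector before model training.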


Authors’ Affiliations

(1) Department of Electrical and Computer Engineering, University of California San Diego
(2) Department of Computer Science and Engineering, Indian Institute of Technology Madras
(3) STAR Lab, SRI International

References

  1. Rabiner LR, Juang BH: Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, NJ, USA; 1993.
  2. Aikawa K, Singer H, Kawahara H, Tohkura Y: A dynamic cepstrum incorporating time-frequency masking and its application to continuous speech recognition. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '93), April 1993, Minneapolis, Minn, USA 2: 668-671.
  3. Bacchiani M, Aikawa K: Optimization of time-frequency masking filters using the minimum classification error criterion. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '94), April 1994, Adelaide, SA, Australia 2: 197-200.
  4. Hermansky H: Perceptual linear predictive (PLP) analysis of speech. Journal of the Acoustical Society of America 1990, 87(4): 1738-1752. doi:10.1121/1.399423
  5. Ghitza O: Auditory models and human performance in tasks related to speech coding and speech recognition. IEEE Transactions on Speech and Audio Processing 1994, 2(1, part 2): 115-132. doi:10.1109/89.260357
  6. Payton KL: Vowel processing by a model of the auditory periphery: a comparison to eighth-nerve responses. The Journal of the Acoustical Society of America 1988, 83(1): 145-162. doi:10.1121/1.396441
  7. Lyon R: A computational model of filtering, detection, and compression in the cochlea. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '82), May 1982, Paris, France 7: 1282-1285.
  8. Seneff S: A joint synchrony/mean-rate model of auditory speech processing. Journal of Phonetics 1988, 16(1): 55-76.
  9. Cohen JR: Application of an auditory model to speech recognition. The Journal of the Acoustical Society of America 1989, 85(6): 2623-2629. doi:10.1121/1.397756
  10. Hunt MJ, Richardson SM, Bateman DC, Piau A: An investigation of PLP and IMELDA acoustic representations and of their potential for combination. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '91), May 1991, Toronto, Ont, Canada 2: 881-884.
  11. Davis SB, Mermelstein P: Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Transactions on Acoustics, Speech, and Signal Processing 1980, 28(4): 357-366. doi:10.1109/TASSP.1980.1163420
  12. Paliwal KK, Alsteris LD: On the usefulness of STFT phase spectrum in human listening tests. Speech Communication 2005, 45(2): 153-170. doi:10.1016/j.specom.2004.08.001
  13. Alsteris LD, Paliwal KK: Some experiments on iterative reconstruction of speech from STFT phase and magnitude spectra. Proceedings of 9th European Conference on Speech Communication and Technology (EUROSPEECH '05), September 2005, Lisbon, Portugal 337-340.
  14. Murthy HA, Gadde VRR: The modified group delay function and its application to phoneme recognition. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), April 2003, Hong Kong 1: 68-71.
  15. Hegde RM, Murthy HA, Gadde VRR: Application of the modified group delay function to speaker identification and discrimination. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), May 2004, Montreal, Quebec, Canada 1: 517-520.
  16. Hegde RM, Murthy HA, Gadde VRR: Continuous speech recognition using joint features derived from the modified group delay function and MFCC. Proceedings of 8th International Conference on Spoken Language Processing (INTERSPEECH '04), October 2004, Jeju Island, Korea 2: 905-908.
  17. Hegde RM, Murthy HA, Gadde VRR: The modified group delay feature: a new spectral representation of speech. Proceedings of 8th International Conference on Spoken Language Processing (INTERSPEECH '04), October 2004, Jeju Island, Korea 2: 913-916.
  18. Hegde RM, Murthy HA, Gadde VRR: Significance of the modified group delay feature in speech recognition. To appear in IEEE Transactions on Speech and Audio Processing.
  19. Hegde RM, Murthy HA, Gadde VRR: Speech processing using joint features derived from the modified group delay function. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), March 2005, Philadelphia, Pa, USA 1: 541-544.
  20. Okawa S, Bocchieri E, Potamianos A: Multi-band speech recognition in noisy environments. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '98), May 1998, Seattle, Wash, USA 2: 641-644.
  21. Ellis D: Feature stream combination before and/or after the acoustic model. Tech. Rep. TR-00-007, International Computer Science Institute, Berkeley, Calif, USA; 2000.
  22. Christensen H: Speech recognition using heterogeneous information extraction in multi-stream based systems. Ph.D. dissertation.
  23. Kingsbury BED, Morgan N: Recognizing reverberant speech with RASTA-PLP. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '97), April 1997, Munich, Germany 2: 1259-1262.
  24. Wu S-L, Kingsbury BED, Morgan N, Greenberg S: Incorporating information from syllable-length time scales into automatic speech recognition. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '98), May 1998, Seattle, Wash, USA 2: 721-724.
  25. Janin A, Ellis D, Morgan N: Multi-stream speech recognition: ready for prime time? Proceedings of 6th European Conference on Speech Communication and Technology (EUROSPEECH '99), September 1999, Budapest, Hungary 591-594.
  26. Kirchhoff K, Bilmes JA: Dynamic classifier combination in hybrid speech recognition systems using utterance-level confidence values. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '99), March 1999, Phoenix, Ariz, USA 2: 693-696.
  27. Database for Indian Languages. Speech and Vision Lab, IIT Madras, Chennai, India; 2001.
  28. NTIS: The DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus. 1993.
  29. Jankowski C, Kalyanswamy A, Basson S, Spitz J: NTIMIT: a phonetically balanced, continuous speech, telephone bandwidth speech database. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '90), April 1990, Albuquerque, NM, USA 1: 109-112.
  30. Besacier L, Bonastre JF: Time and frequency pruning for speaker identification. Proceedings of the 14th International Conference on Pattern Recognition (ICPR '98), August 1998, Brisbane, Qld, Australia 2: 1619-1621.
  31. Brown KL, George EB: CTIMIT: a speech corpus for the cellular environment with applications to automatic speech recognition. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '95), May 1995, Detroit, Mich, USA 1: 105-108.
  32. Muthusamy YK, Cole RA, Oshika BT: The OGI multi-language telephone speech corpus. Proceedings of the 2nd International Conference on Spoken Language Processing (ICSLP '92), October 1992, Banff, Alberta, Canada 895-898.
  33. Tumer K: Linear and order statistics combiners for reliable pattern classification. Ph.D. dissertation.
  34. Perrone MP, Cooper LN: When networks disagree: ensemble methods for hybrid neural networks. In Neural Networks for Speech and Image Processing. Chapman-Hall, London, UK; 1993: 126-142.
  35. Sarikaya R, Hansen JHL: Analysis of the root-cepstrum for acoustic modeling and fast decoding in speech recognition. Proceedings of the 7th European Conference on Speech Communication and Technology (EUROSPEECH '01), September 2001, Aalborg, Denmark 687-690.
  36. Krogh A, Vedelsby J: Neural network ensembles, cross validation, and active learning. In Advances in Neural Information Processing Systems. Volume 7. MIT Press, Cambridge, Mass, USA; 1995: 231-238.
  37. Murthy HA, Yegnanarayana B: Formant extraction from group delay function. Speech Communication 1991, 10(3): 209-221. doi:10.1016/0167-6393(91)90011-H
  38. Yegnanarayana B, Saikia DK, Krishnan TR: Significance of group delay functions in signal reconstruction from spectral magnitude or phase. IEEE Transactions on Acoustics, Speech, and Signal Processing 1984, 32(3): 610-623. doi:10.1109/TASSP.1984.1164365
  39. Prasad VK, Nagarajan T, Murthy HA: Automatic segmentation of continuous speech using minimum phase group delay functions. Speech Communication 2004, 42(3-4): 429-446. doi:10.1016/j.specom.2003.12.002
  40. Yegnanarayana B, Murthy HA: Significance of group delay functions in spectrum estimation. IEEE Transactions on Signal Processing 1992, 40(9): 2281-2289. doi:10.1109/78.157227
  41. Yip P, Rao KR: Discrete Cosine Transform: Algorithms, Advantages, and Applications. Academic Press, San Diego, Calif, USA; 1997.
  42. Acero A: Acoustical and environmental robustness in automatic speech recognition. Ph.D. dissertation.
  43. Murthy HA, Beaufays F, Heck LP, Weintraub M: Robust text-independent speaker identification over telephone channels. IEEE Transactions on Speech and Audio Processing 1999, 7(5): 554-568. doi:10.1109/89.784108
  44. Alexandre P, Lockwood P: Root cepstral analysis: a unified view. Application to speech processing in car noise environments. Speech Communication 1993, 12(3): 277-288. doi:10.1016/0167-6393(93)90099-7
  45. Gadde VRR, Stolcke A, Vergyri D, Zheng J, Sonmez K, Venkataraman A: The SRI SPINE 2001 evaluation system. SRI International, Menlo Park, Calif, USA; 2001.

Copyright

© Rajesh M. Hegde et al. 2007

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.