B.-H. Juang, L. R. Rabiner, Hidden Markov models for speech recognition. Technometrics 33(3), 251–272 (1991).
B.-H. Juang, L. R. Rabiner, Automatic speech recognition: a brief history of the technology development, vol. 1 (Georgia Institute of Technology, Atlanta; Rutgers University and the University of California, Santa Barbara, 2005).
L. Deng, G. Hinton, B. Kingsbury, in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. New types of deep neural network learning for speech recognition and related applications: an overview (IEEE, 2013), pp. 8599–8603.
G. E. Dahl, D. Yu, L. Deng, A. Acero, Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans. Audio Speech Lang. Process. 20(1), 30–42 (2011).
G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al., Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process. Mag. 29(6), 82–97 (2012).
O. Abdel-Hamid, A.-r. Mohamed, H. Jiang, L. Deng, G. Penn, D. Yu, Convolutional neural networks for speech recognition. IEEE/ACM Trans. Audio Speech Lang. Process. 22(10), 1533–1545 (2014).
H. Sak, A. Senior, F. Beaufays, Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014).
A. Graves, N. Jaitly, A.-r. Mohamed, in 2013 IEEE Workshop on Automatic Speech Recognition and Understanding. Hybrid speech recognition with deep bidirectional LSTM (IEEE, 2013), pp. 273–278.
H. Wang, K. Khyuru, J. Li, G. Li, J. Dang, L. Huang, in Proc. APSIPA ASC. Investigation on acoustic modeling with different phoneme set for continuous Lhasa Tibetan recognition based on DNN method, (2016).
J. Li, H. Wang, L. Wang, J. Dang, K. Khuru, G. Lobsang, in Proc. ISCSLP. Exploring tonal information for Lhasa dialect acoustic modeling, (2016).
D. Povey, V. Peddinti, D. Galvez, P. Ghahremani, V. Manohar, X. Na, Y. Wang, S. Khudanpur, in Proc. INTERSPEECH. Purely sequence-trained neural networks for ASR based on lattice-free MMI, (2016).
J. Yan, Z. Lv, S. Huang, H. Yu, in Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages. Low-resource Tibetan dialect acoustic modeling based on transfer learning, (2018).
J. Chorowski, D. Bahdanau, K. Cho, Y. Bengio, End-to-end continuous speech recognition using attention-based recurrent NN: first results. arXiv preprint arXiv:1412.1602 (2014).
W. Chan, N. Jaitly, Q. Le, O. Vinyals, in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Listen, attend and spell: a neural network for large vocabulary conversational speech recognition (IEEE, 2016), pp. 4960–4964.
S. Watanabe, T. Hori, S. Kim, J. R. Hershey, T. Hayashi, Hybrid CTC/attention architecture for end-to-end speech recognition. IEEE J. Sel. Top. Sig. Process. 11(8), 1240–1253 (2017).
A. Graves, A. -r. Mohamed, G. Hinton, in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Speech recognition with deep recurrent neural networks (IEEE, 2013), pp. 6645–6649.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin, in Advances in Neural Information Processing Systems. Attention is all you need, (2017), pp. 5998–6008.
A. Graves, N. Jaitly, in Proc. ICML. Towards end-to-end speech recognition with recurrent neural networks, (2014).
Y. Miao, M. Gowayyed, F. Metze, in Proc. IEEE-ASRU. EESEN: end-to-end speech recognition using deep RNN models and WFST-based decoding, (2015), pp. 167–174.
J. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, Y. Bengio, in Proc. NIPS. Attention-based models for speech recognition, (2015).
H. Hadian, H. Sameti, D. Povey, S. Khudanpur, in Proc. INTERSPEECH. End-to-end speech recognition using lattice-free MMI, (2018).
S. Watanabe, T. Hori, S. Karita, T. Hayashi, J. Nishitoba, Y. Unno, N. E. Y. Soplin, J. Heymann, M. Wiesner, N. Chen, A. Renduchintala, T. Ochiai, in Proc. INTERSPEECH. ESPnet: end-to-end speech processing toolkit, (2018).
S. Ueno, H. Inaguma, M. Mimura, T. Kawahara, in Proc. IEEE-ICASSP. Acoustic-to-word attention-based model complemented with character-level CTC-based model, (2018), pp. 5804–5808.
T. Hori, S. Watanabe, Y. Zhang, W. Chan, in Proc. INTERSPEECH. Advances in joint CTC-attention based end-to-end speech recognition with a deep CNN encoder and RNN-LM, (2017).
L. Dong, S. Xu, B. Xu, in Proc. IEEE-ICASSP. Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition, (2018).
S. Zhou, S. Xu, B. Xu, Multilingual end-to-end speech recognition with a single transformer on low-resource languages. arXiv preprint arXiv:1806.05059 (2018).
S. Zhou, L. Dong, S. Xu, B. Xu, A comparison of modeling units in sequence-to-sequence speech recognition with the transformer on Mandarin Chinese. arXiv preprint arXiv:1805.06239 (2018).
S. Zhou, L. Dong, S. Xu, B. Xu, in Proc. INTERSPEECH. Syllable-based sequence-to-sequence speech recognition with the transformer in Mandarin Chinese, (2018).
V. M. Shetty, M. Sagaya Mary N. J., in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Improving the performance of transformer based low resource speech recognition for Indian languages, (2020), pp. 8279–8283. https://doi.org/10.1109/ICASSP40776.2020.9053808.
B. Zoph, D. Yuret, J. May, K. Knight, Transfer learning for low-resource neural machine translation. arXiv preprint arXiv:1604.02201 (2016).
J. Cho, M. K. Baskar, R. Li, M. Wiesner, S. H. Mallidi, N. Yalta, M. Karafiat, S. Watanabe, T. Hori, in 2018 IEEE Spoken Language Technology Workshop (SLT). Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling (IEEE, 2018), pp. 521–527.
J. Meyer, Multi-task and transfer learning in low-resource speech recognition. PhD thesis (The University of Arizona, 2019).
S. Dalmia, R. Sanabria, F. Metze, A. W. Black, in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Sequence-based multi-lingual low resource speech recognition (IEEE, 2018), pp. 4909–4913.
L. Pan, S. Li, L. Wang, J. Dang, in Proc. APSIPA ASC. Effective training end-to-end ASR systems for low-resource Lhasa dialect of Tibetan language, (2019).
T. N. Sainath, R. Prabhavalkar, S. Kumar, S. Lee, A. Kannan, D. Rybach, V. Schogol, P. Nguyen, B. Li, Y. Wu, Z. Chen, C. Chiu, in Proc. IEEE-ICASSP. No need for a lexicon? Evaluating the value of the pronunciation lexicon in end-to-end models, (2018), pp. 5859–5863.
A. Kannan, Y. Wu, P. Nguyen, T. N. Sainath, Z. Chen, R. Prabhavalkar, in Proc. IEEE-ICASSP. An analysis of incorporating an external language model into a sequence-to-sequence model, (2018), pp. 5824–5828.
H. Bu, J. Du, X. Na, B. Wu, H. Zheng, in Proc. Oriental COCOSDA. AIShell-1: an open-source Mandarin speech corpus and a speech recognition baseline, (2017).