Articles

Page 7 of 11

  1. The full modulation spectrum is a high-dimensional representation of one-dimensional audio signals. Most previous research in automatic speech recognition converted this very rich representation into the equiv...

    Authors: Sara Ahmadi, Seyed Mohammad Ahadi, Bert Cranen and Lou Boves
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:36
  2. Building a voice-operated system for learning disabled users is a difficult task that requires a considerable amount of time and effort. Due to the wide spectrum of disabilities and their different related pho...

    Authors: Marek Bohac, Michaela Kucharova, Zoraida Callejas, Jan Nouza and Petr Červa
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:39
  3. In this paper, we propose a semi-blind, imperceptible, and robust digital audio watermarking algorithm. The proposed algorithm is based on cascading two well-known transforms: the discrete wavelet transform an...

    Authors: Ali Al-Haj
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:37
  4. Model-based speech enhancement algorithms that employ trained models, such as codebooks, hidden Markov models, Gaussian mixture models, etc., containing representations of speech such as linear predictive coef...

    Authors: Devireddy Hanumantha Rao Naidu and Sriram Srinivasan
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:35
  5. The task of automatic retrieval and extraction of lyrics from the web is of great importance to different Music Information Retrieval applications. However, despite its importance, very little research has bee...

    Authors: Rafael P Ribeiro, Murilo AP Almeida and Carlos N Silla Jr
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:27
  6. This paper studies a novel audio segmentation-by-classification approach based on factor analysis. The proposed technique compensates the within-class variability by using class-dependent factor loading matric...

    Authors: Diego Castán, Alfonso Ortega, Antonio Miguel and Eduardo Lleida
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:34
  7. This paper proposes a new speech enhancement (SE) algorithm utilizing constraints to the Wiener gain function which is capable of working at 10 dB and lower signal-to-noise ratios (SNRs). The wavelet threshold...

    Authors: Yanna Ma and Akinori Nishihara
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:32
  8. The current paper examines influences of speech rate on Fujisaki model parameters based on read speech from the BonnTempo-Corpus containing productions by 12 native speakers of German at five different intende...

    Authors: Hansjörg Mixdorff, Adrian Leemann and Volker Dellwo
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:33
  9. In many speech communication applications, robust localization and tracking of multiple speakers in noisy and reverberant environments are of major importance. Several algorithms to tackle this problem have be...

    Authors: Stephan Gerlach, Jörg Bitzer, Stefan Goetze and Simon Doclo
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:31
  10. In a bid to enhance the search performance, this paper presents an improved version of reduced candidate mechanism (RCM), an algebraic codebook search conducted on an algebraic code-excited linear prediction (...

    Authors: Ning-Yun Ku, Cheng-Yu Yeh and Shaw-Hwa Hwang
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:30
  11. This paper proposes two novel approaches for parameter estimation of a superpositional intonation model. These approaches present linguistic and paralinguistic assumptions for initializing a pre-existing stand...

    Authors: Humberto M Torres, Jorge A Gurlekian, Hansjörg Mixdorff and Hartmut Pfitzinger
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:28
  12. In this paper, unsupervised learning is used to separate percussive and harmonic sounds from monaural non-vocal polyphonic signals. Our algorithm is based on a modified non-negative matrix factorization (NMF) ...

    Authors: Francisco Jesus Canadas-Quesada, Pedro Vera-Candeas, Nicolas Ruiz-Reyes, Julio Carabias-Orti and Pablo Cabanas-Molero
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:26
  13. Composers may not provide instructions for playing their works, especially for instrument solos, and therefore, different musicians may give very different interpretations of the same work. Such differences us...

    Authors: Yi-Ju Lin, Tien-Ming Wang, Ta-Chun Chen, Yin-Lin Chen, Wei-Chen Chang and Alvin WY Su
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:25
  14. This paper investigates the estimation of underlying articulatory targets of Thai vowels as invariant representation of vocal tract shapes by means of analysis-by-synthesis based on acoustic data. The basic id...

    Authors: Santitham Prom-on, Peter Birkholz and Yi Xu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:23
  15. The paper describes an auditory processing-based feature extraction strategy for robust speech recognition in environments where conventional automatic speech recognition (ASR) approaches are not successful. ...

    Authors: Hari Krishna Maganti and Marco Matassoni
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:21
  16. In this paper, a two-stage scheme is proposed to deal with the difficult problem of acoustic echo cancellation (AEC) in single-channel scenario in the presence of noise. In order to overcome the major challeng...

    Authors: Upal Mahbub, Shaikh Anowarul Fattah, Wei-Ping Zhu and M Omair Ahmad
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:20
  17. Neural network language models (NNLMs) have proven to be quite powerful for sequence modeling, including the feed-forward NNLM (FNNLM), the recurrent NNLM (RNNLM), etc. One main concern for NNLMs is the hea...

    Authors: Yongzhe Shi, Wei-Qiang Zhang, Meng Cai and Jia Liu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:19
  18. When several acoustic sources are simultaneously active in a meeting room scenario, and both the position of the sources and the identity of the time-overlapped sound classes have been estimated, the problem o...

    Authors: Rupayan Chakraborty, Climent Nadeu and Taras Butko
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:18
  19. Speech enhancement has an increasing demand in mobile communications and faces a great challenge in a real ambient noisy environment. This paper develops an effective spatial-frequency domain speech enhancemen...

    Authors: Yue Xian Zou, Peng Wang, Yong Qing Wang, Christian H Ritz and Jiangtao Xi
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:17
  20. It was recently shown that delta-sigma quantization (DSQ) can be used for optimal multiple description (MD) coding of Gaussian sources. The DSQ scheme combined oversampling, prediction, and noise-shaping in or...

    Authors: Jack Leegaard, Jan Østergaard, Søren Holdt Jensen and Ram Zamir
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:16
  21. Previously, a dereverberation method based on generalized spectral subtraction (GSS) using multi-channel least mean-squares (MCLMS) has been proposed. The results of speech recognition experiments showed that ...

    Authors: Zhaofeng Zhang, Longbiao Wang and Atsuhiko Kai
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:15
  22. The robustness of n-gram language models depends on the quality of text data on which they have been trained. The text corpora collected from various resources such as web pages or electronic documents are charac...

    Authors: Ján Staš, Jozef Juhár and Daniel Hládek
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:14
  23. We present a feature enhancement method that uses neural networks (NNs) to map the reverberant feature in a log-melspectral domain to its corresponding anechoic feature. The mapping is done by cascade NNs trai...

    Authors: Aditya Arie Nugraha, Kazumasa Yamamoto and Seiichi Nakagawa
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:13
  24. Decision tree-clustered context-dependent hidden semi-Markov models (HSMMs) are typically used in statistical parametric speech synthesis to represent probability densities of acoustic features given contextua...

    Authors: Soheil Khorram, Hossein Sameti, Fahimeh Bahmaninezhad, Simon King and Thomas Drugman
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:12
  25. Eigenphone-based speaker adaptation outperforms conventional maximum likelihood linear regression (MLLR) and eigenvoice methods when there is sufficient adaptation data. However, it suffers from severe over-fi...

    Authors: Wen-Lin Zhang, Wei-Qiang Zhang, Dan Qu and Bi-Cheng Li
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:11
  26. Three-dimensional (3D) audio technologies are booming with the success of 3D video technology. The surge in the number of audio channels produces data volumes too large for available transmission bandwidth and storage media, and the...

    Authors: Shi Dong, Ruimin Hu, Xiaochen Wang, Yuhong Yang and Weiping Tu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:10
  27. An approach is proposed for creating location-specific audio textures for virtual location-exploration services. The presented approach creates audio textures by processing a small amount of audio recorded at ...

    Authors: Toni Heittola, Annamaria Mesaros, Dani Korpi, Antti Eronen and Tuomas Virtanen
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:9
  28. In this paper, an analytical approach to estimate the instantaneous frequencies of a multicomponent signal is presented. A non-stationary signal composed of oscillation modes or resonances is described by a mu...

    Authors: Mohammadali Sebghati, Hamidreza Amindavar and James A Ritcey
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:8
  29. This paper presents an optical music recognition (OMR) system to process the handwritten musical scores of Kunqu Opera written in Gong-Che Notation (GCN). First, it introduces the background of Kunqu Opera and GC...

    Authors: Gen-Fang Chen and Jia-Shing Sheu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:7
  30. We present in this paper a voice conversion (VC) method for a person with an articulation disorder resulting from athetoid cerebral palsy. The movement of such speakers is limited by their athetoid symptoms, a...

    Authors: Ryo Aihara, Ryoichi Takashima, Tetsuya Takiguchi and Yasuo Ariki
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:5
  31. We propose a novel approach of integrating exemplar-based template matching with statistical modeling to improve continuous speech recognition. We choose the template unit to be context-dependent phone segment...

    Authors: Xie Sun and Yunxin Zhao
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:4
  32. This paper proposes a new aliasing cancelation algorithm for the transition between non-aliased coding and transform coding with time domain aliasing cancelation (TDAC). It is effectively utilized for unified ...

    Authors: Jeongook Song and Hong-Goo Kang
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:3
  33. We propose an integrative method of recognizing gestures such as pointing, accompanying speech. Speech generated simultaneously with gestures can assist in the recognition of gestures, and since this occurs in...

    Authors: Madoka Miki, Norihide Kitaoka, Chiyomi Miyajima, Takanori Nishino and Kazuya Takeda
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:2
  34. Bandwidth extension is an effective technique for enhancing the quality of audio signals by reconstructing their high-frequency components. In this paper, a novel blind bandwidth extension method is proposed b...

    Authors: Chang-Chun Bao, Xin Liu, Yong-Tao Sha and Xing-Tao Zhang
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:1
  35. Prosody and prosodic boundaries carry significant information regarding linguistics and paralinguistics and are important aspects of speech. In the field of prosodic event detection, many local acoustic featur...

    Authors: Junhong Zhao, Wei-Qiang Zhang, Hua Yuan, Michael T Johnson, Jia Liu and Shanhong Xia
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2013 2013:30
  36. In this paper, we propose a novel noise-robustness method known as weighted sub-band histogram equalization (WS-HEQ) to improve speech recognition accuracy in noise-corrupted environments. Considering the obse...

    Authors: Jeih-weih Hung and Hao-teng Fan
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2013 2013:29
  37. The framework of a voice conversion system is expected to emphasize both the static and dynamic characteristics of the speech signal. Conventional approaches such as Mel frequency cepstrum coefficients and line...

    Authors: Jagannath H Nirmal, Mukesh A Zaveri, Suprava Patnaik and Pramod H Kachare
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2013 2013:28
  38. This paper investigates real-time N-dimensional wideband sound source localization in outdoor (far-field) and low-degree reverberation cases, using a simple N-microphone arrangement. Outdoor sound source localiza...

    Authors: Ali Parsayan and Seyed Mohammad Ahadi
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2013 2013:27

    The Correction to this article has been published in EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:24

  39. Affective computing, especially from speech, is one of the key steps toward building more natural and effective human-machine interaction. In recent years, several emotional speech corpora in different languag...

    Authors: Caglar Oflazoglu and Serdar Yildirim
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2013 2013:26
  40. The performance of thresholding-based methods for speech enhancement largely depends upon the estimation of the exact threshold value. In this paper, a new thresholding-based speech enhancement approach, where...

    Authors: Tahsina Farah Sanam and Celia Shahnaz
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2013 2013:25
  41. This paper investigates multi-modal aspects of audiovisual quality assessment for interactive communication services. It shows how perceived auditory and visual qualities integrate to an overall audiovisual qu...

    Authors: Benjamin Belmudez and Sebastian Möller
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2013 2013:24
  42. Query-by-Example Spoken Term Detection (QbE STD) aims at retrieving data from a speech data repository given an acoustic query containing the term of interest as input. Nowadays, it has been receiving much int...

    Authors: Javier Tejedor, Doroteo T Toledano, Xavier Anguera, Amparo Varona, Lluís F Hurtado, Antonio Miguel and José Colás
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2013 2013:23
  43. The recurrent neural network language model (RNNLM) has shown significant promise for statistical language modeling. In this work, a new class-based output layer method is introduced to further improve the RNN...

    Authors: Yongzhe Shi, Wei-Qiang Zhang, Jia Liu and Michael T Johnson
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2013 2013:22
  44. This paper proposes a novel and robust voice activity detection (VAD) algorithm utilizing the long-term spectral flatness measure (LSFM), which is capable of working at 10 dB and lower signal-to-noise ratios (SNRs)....

    Authors: Yanna Ma and Akinori Nishihara
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2013 2013:87

    The Erratum to this article has been published in EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:30

Annual Journal Metrics

  • 2022 Citation Impact
    2.4 - 2-year Impact Factor
    2.0 - 5-year Impact Factor
    1.081 - SNIP (Source Normalized Impact per Paper)
    0.458 - SJR (SCImago Journal Rank)

  • 2023 Speed
    17 days from submission to first editorial decision for all manuscripts (median)
    154 days from submission to acceptance (median)

  • 2023 Usage
    368,607 downloads
    70 Altmetric mentions
