Articles

Page 3 of 7

  1. Content type: Research

    Singer identification is a difficult topic in music information retrieval because background instrumental music accompanies the singing voice, which reduces system performance. One of the main disadvantag...

    Authors: Tushar Ratanpara and Narendra Patel

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:16

  2. Content type: Research

    Optimal automatic speech recognition (ASR) performance is achieved when the recognition system is tested under circumstances identical to those in which it was trained. In the real world, however, there exist many ...

    Authors: Randa Al-Wakeel, Mahmoud Shoman, Magdy Aboul-Ela and Sherif Abdou

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:15

  3. Content type: Research

    The Farrow-structure-based steerable broadband beamformer (FSBB) is particularly useful in applications where the sound source of interest may move over a wide angular range. However, in contrast with conven...

    Authors: Tiannan Wang and Huawei Chen

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:14

  4. Content type: Research

    This paper presents an objective speech quality model, ViSQOL, the Virtual Speech Quality Objective Listener. It is a signal-based, full-reference, intrusive metric that models human speech quality perception ...

    Authors: Andrew Hines, Jan Skoglund, Anil C Kokaram and Naomi Harte

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:13

  5. Content type: Research

    Deep neural network (DNN)-based approaches have been shown to be effective in many automatic speech recognition systems. However, few works have focused on DNNs for distant-talking speaker recognition. In this...

    Authors: Zhaofeng Zhang, Longbiao Wang, Atsuhiko Kai, Takanori Yamada, Weifeng Li and Masahiro Iwahashi

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:12

  6. Content type: Research

    Estimating the directions of arrival (DOAs) of multiple simultaneous mobile sound sources is an important step for various audio signal processing applications. In this contribution, we present an approach tha...

    Authors: Caleb Rascon, Gibran Fuentes and Ivan Meza

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:11

  7. Content type: Research

    Acoustic data transmission (ADT) is a branch of audio data hiding techniques that communicates data over the short-range aerial channel between a loudspeaker and a microphone. In this paper, ...

    Authors: Kiho Cho, Jae Choi and Nam Soo Kim

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:10

  8. Content type: Research

    Automatic diagnosis and monitoring of Alzheimer’s disease can have a significant impact on society as well as the well-being of patients. The part of the brain cortex that processes language abilities is one o...

    Authors: Ali Khodabakhsh, Fatih Yesil, Ekrem Guner and Cenk Demiroglu

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:9

  9. Content type: Research

    This paper presents a voice conversion (VC) method that utilizes conditional restricted Boltzmann machines (CRBMs) for each speaker to obtain high-order speaker-independent spaces where voice features are conv...

    Authors: Toru Nakashika, Tetsuya Takiguchi and Yasuo Ariki

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:8

  10. Content type: Research

    Automatic forensic voice comparison (FVC) systems employed in forensic casework have often relied on Gaussian Mixture Model - Universal Background Models (GMM-UBMs) for modelling with relatively little researc...

    Authors: Chee Cheun Huang, Julien Epps and Tharmarajah Thiruvaran

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:7

  11. Content type: Research

    Music identification via audio fingerprinting has been an active research field in recent years. In the real-world environment, music queries are often deformed by various interferences which typically include...

    Authors: Xiu Zhang, Bilei Zhu, Linwei Li, Wei Li, Xiaoqiang Li, Wei Wang, Peizhong Lu and Wenqiang Zhang

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:6

  12. Content type: Research

    Owing to the suprasegmental behavior of emotional speech, turn-level features have demonstrated greater success than frame-level features for recognition-related tasks. Conventionally, such features are obtai...

    Authors: Mohit Shah, Chaitali Chakrabarti and Andreas Spanias

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:4

  13. Content type: Research

    In this paper, an initial feature vector based on the combination of the wavelet packet decomposition (WPD) and the Mel frequency cepstral coefficients (MFCCs) is proposed. For optimizing the initial feature v...

    Authors: Vahid Majidnezhad

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:3

  14. Content type: Research

    Deep neural networks (DNNs) have gained remarkable success in speech recognition, partially attributed to the flexibility of DNN models in learning complex patterns of speech signals. This flexibility, however...

    Authors: Shi Yin, Chao Liu, Zhiyong Zhang, Yiye Lin, Dong Wang, Javier Tejedor, Thomas Fang Zheng and Yinguo Li

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:2

  15. Content type: Research

    Vocal tremor has been simulated using a high-dimensional discrete vocal fold model. Specifically, respiratory, phonatory, and articulatory tremors have been modeled as instabilities in six parameters of the mo...

    Authors: Rubén Fraile, Juan Ignacio Godino-Llorente and Malte Kob

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:1

  16. Content type: Research

    Currently, acoustic and phonotactic spoken language recognition (SLR) systems are widely used for language recognition. To achieve better performance, researchers combine multiple subsystems with the r...

    Authors: Wei-Wei Liu, Wei-Qiang Zhang, Michael T Johnson and Jia Liu

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:42

  17. Content type: Research

    Speech technology is firmly rooted in daily life, most notably in command-and-control (C&C) applications. C&C usability degrades quickly, however, when the system is used by people with non-standard speech. We pursue a fu...

    Authors: Bart Ons, Jort F Gemmeke and Hugo Van hamme

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:43

  18. Content type: Research

    The full modulation spectrum is a high-dimensional representation of one-dimensional audio signals. Most previous research in automatic speech recognition converted this very rich representation into the equiv...

    Authors: Sara Ahmadi, Seyed Mohammad Ahadi, Bert Cranen and Lou Boves

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:36

  19. Content type: Research

    Building a voice-operated system for learning disabled users is a difficult task that requires a considerable amount of time and effort. Due to the wide spectrum of disabilities and their different related pho...

    Authors: Marek Bohac, Michaela Kucharova, Zoraida Callejas, Jan Nouza and Petr Červa

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:39

  20. Content type: Research

    In this paper, we propose a semi-blind, imperceptible, and robust digital audio watermarking algorithm. The proposed algorithm is based on cascading two well-known transforms: the discrete wavelet transform an...

    Authors: Ali Al-Haj

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:37

  21. Content type: Research

    Model-based speech enhancement algorithms that employ trained models, such as codebooks, hidden Markov models, Gaussian mixture models, etc., containing representations of speech such as linear predictive coef...

    Authors: Devireddy Hanumantha Rao Naidu and Sriram Srinivasan

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:35

  22. Content type: Research

    The task of automatic retrieval and extraction of lyrics from the web is of great importance to different Music Information Retrieval applications. However, despite its importance, very little research has bee...

    Authors: Rafael P Ribeiro, Murilo AP Almeida and Carlos N Silla Jr

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:27

  23. Content type: Research

    This paper studies a novel audio segmentation-by-classification approach based on factor analysis. The proposed technique compensates the within-class variability by using class-dependent factor loading matric...

    Authors: Diego Castán, Alfonso Ortega, Antonio Miguel and Eduardo Lleida

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:34

  24. Content type: Research

    This paper proposes a new speech enhancement (SE) algorithm utilizing constraints on the Wiener gain function, which is capable of working at signal-to-noise ratios (SNRs) of 10 dB and lower. The wavelet threshold...

    Authors: Yanna Ma and Akinori Nishihara

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:32

  25. Content type: Research

    The current paper examines influences of speech rate on Fujisaki model parameters based on read speech from the BonnTempo-Corpus containing productions by 12 native speakers of German at five different intende...

    Authors: Hansjörg Mixdorff, Adrian Leemann and Volker Dellwo

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:33

  26. Content type: Research

    In many speech communication applications, robust localization and tracking of multiple speakers in noisy and reverberant environments are of major importance. Several algorithms to tackle this problem have be...

    Authors: Stephan Gerlach, Jörg Bitzer, Stefan Goetze and Simon Doclo

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:31

  27. Content type: Research

    In a bid to enhance search performance, this paper presents an improved version of the reduced candidate mechanism (RCM), an algebraic codebook search conducted on an algebraic code-excited linear prediction (...

    Authors: Ning-Yun Ku, Cheng-Yu Yeh and Shaw-Hwa Hwang

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:30

  28. Content type: Research

    This paper proposes two novel approaches for parameter estimation of a superpositional intonation model. These approaches present linguistic and paralinguistic assumptions for initializing a pre-existing stand...

    Authors: Humberto M Torres, Jorge A Gurlekian, Hansjörg Mixdorff and Hartmut Pfitzinger

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:28

  29. Content type: Research

    In this paper, unsupervised learning is used to separate percussive and harmonic sounds from monaural non-vocal polyphonic signals. Our algorithm is based on a modified non-negative matrix factorization (NMF) ...

    Authors: Francisco Jesus Canadas-Quesada, Pedro Vera-Candeas, Nicolas Ruiz-Reyes, Julio Carabias-Orti and Pablo Cabanas-Molero

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:26

  30. Content type: Research

    Composers may not provide instructions for playing their works, especially for instrument solos, and therefore, different musicians may give very different interpretations of the same work. Such differences us...

    Authors: Yi-Ju Lin, Tien-Ming Wang, Ta-Chun Chen, Yin-Lin Chen, Wei-Chen Chang and Alvin WY Su

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:25

  31. Content type: Research

    This paper investigates the estimation of underlying articulatory targets of Thai vowels as invariant representation of vocal tract shapes by means of analysis-by-synthesis based on acoustic data. The basic id...

    Authors: Santitham Prom-on, Peter Birkholz and Yi Xu

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:23

  32. Content type: Research

    The paper describes an auditory processing-based feature extraction strategy for robust speech recognition in environments where conventional automatic speech recognition (ASR) approaches are not successful. ...

    Authors: Hari Krishna Maganti and Marco Matassoni

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:21

  33. Content type: Research

    In this paper, a two-stage scheme is proposed to deal with the difficult problem of acoustic echo cancellation (AEC) in a single-channel scenario in the presence of noise. In order to overcome the major challeng...

    Authors: Upal Mahbub, Shaikh Anowarul Fattah, Wei-Ping Zhu and M Omair Ahmad

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:20

  34. Content type: Research

    Neural network language models (NNLMs), including feed-forward NNLMs (FNNLMs) and recurrent NNLMs (RNNLMs), have proved to be quite powerful for sequence modeling. One main concern for NNLMs is the hea...

    Authors: Yongzhe Shi, Wei-Qiang Zhang, Meng Cai and Jia Liu

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:19

  35. Content type: Research

    When several acoustic sources are simultaneously active in a meeting room scenario, and both the positions of the sources and the identities of the time-overlapped sound classes have been estimated, the problem o...

    Authors: Rupayan Chakraborty, Climent Nadeu and Taras Butko

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:18

  36. Content type: Research

    Speech enhancement is in increasing demand in mobile communications and faces great challenges in real ambient noise environments. This paper develops an effective spatial-frequency domain speech enhancemen...

    Authors: Yue Xian Zou, Peng Wang, Yong Qing Wang, Christian H Ritz and Jiangtao Xi

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:17

  37. Content type: Research

    It was recently shown that delta-sigma quantization (DSQ) can be used for optimal multiple description (MD) coding of Gaussian sources. The DSQ scheme combined oversampling, prediction, and noise-shaping in or...

    Authors: Jack Leegaard, Jan Østergaard, Søren Holdt Jensen and Ram Zamir

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:16

  38. Content type: Research

    A dereverberation method based on generalized spectral subtraction (GSS) using multi-channel least mean-squares (MCLMS) was previously proposed. The results of speech recognition experiments showed that ...

    Authors: Zhaofeng Zhang, Longbiao Wang and Atsuhiko Kai

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:15

  39. Content type: Research

    The robustness of n-gram language models depends on the quality of the text data on which they have been trained. Text corpora collected from various sources, such as web pages or electronic documents, are charac...

    Authors: Ján Staš, Jozef Juhár and Daniel Hládek

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:14

  40. Content type: Research

    We present a feature enhancement method that uses neural networks (NNs) to map the reverberant feature in a log-melspectral domain to its corresponding anechoic feature. The mapping is done by cascade NNs trai...

    Authors: Aditya Arie Nugraha, Kazumasa Yamamoto and Seiichi Nakagawa

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:13

  41. Content type: Research

    Decision tree-clustered context-dependent hidden semi-Markov models (HSMMs) are typically used in statistical parametric speech synthesis to represent probability densities of acoustic features given contextua...

    Authors: Soheil Khorram, Hossein Sameti, Fahimeh Bahmaninezhad, Simon King and Thomas Drugman

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:12

  42. Content type: Research

    Eigenphone-based speaker adaptation outperforms conventional maximum likelihood linear regression (MLLR) and eigenvoice methods when there is sufficient adaptation data. However, it suffers from severe over-fi...

    Authors: Wen-Lin Zhang, Wei-Qiang Zhang, Dan Qu and Bi-Cheng Li

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:11

  43. Content type: Research

    Three-dimensional (3D) audio technologies are booming with the success of 3D video technology. The surge in the number of audio channels produces data volumes too large for available transmission bandwidth and storage media, and the...

    Authors: Shi Dong, Ruimin Hu, Xiaochen Wang, Yuhong Yang and Weiping Tu

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:10

  44. Content type: Research

    An approach is proposed for creating location-specific audio textures for virtual location-exploration services. The presented approach creates audio textures by processing a small amount of audio recorded at ...

    Authors: Toni Heittola, Annamaria Mesaros, Dani Korpi, Antti Eronen and Tuomas Virtanen

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:9
