Articles

Page 7 of 12

  1. The presence of physical task stress induces changes in the speech production system which in turn produces changes in speaking behavior. This results in measurable acoustic correlates including changes to for...

    Authors: Keith W. Godin and John H. L. Hansen
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:29
  2. The identity of musical instruments is reflected in the acoustic attributes of musical notes played with them. Recently, it has been argued that these characteristics of musical identity (or timbre) can be bes...

    Authors: Kailash Patil and Mounya Elhilali
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:27
  3. In recent years, deep learning has not only permeated the computer vision and speech recognition research fields but also fields such as acoustic event detection (AED). One of the aims of AED is to detect and ...

    Authors: Miquel Espi, Masakiyo Fujimoto, Keisuke Kinoshita and Tomohiro Nakatani
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:26
  4. A multimodal voice conversion (VC) method for noisy environments is proposed. In our previous non-negative matrix factorization (NMF)-based VC method, source and target exemplars are extracted from parallel tr...

    Authors: Kenta Masaka, Ryo Aihara, Tetsuya Takiguchi and Yasuo Ariki
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:24
  5. In this paper we present the Latin Music Mood Database, an extension of the Latin Music Database but for the task of music mood/emotion classification. The method for assigning mood labels to the musical recor...

    Authors: Carolina L. dos Santos and Carlos N. Silla Jr
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:23
  6. Support vector machines (SVMs) have played an important role in the state-of-the-art language recognition systems. The recently developed extreme learning machine (ELM) tends to have better scalability and ach...

    Authors: Jiaming Xu, Wei-Qiang Zhang, Jia Liu and Shanhong Xia
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:22
  7. Spoken term detection (STD) aims at retrieving data from a speech repository given a textual representation of the search term. Nowadays, it is receiving much interest due to the large volume of multimedia inf...

    Authors: Javier Tejedor, Doroteo T. Toledano, Paula Lopez-Otero, Laura Docio-Fernandez, Carmen Garcia-Mateo, Antonio Cardenal, Julian David Echeverry-Correa, Alejandro Coucheiro-Limeres, Julia Olcoz and Antonio Miguel
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:21
  8. The automatic recognition of MP3 compressed speech presents a challenge to the current systems due to the lossy nature of compression which causes irreversible degradation of the speech wave. This article eval...

    Authors: Michal Borsky, Petr Pollak and Petr Mizera
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:20
  9. We investigate the automatic recognition of emotions in the singing voice and study the worth and role of a variety of relevant acoustic parameters. The data set contains phrases and vocalises sung by eight re...

    Authors: Florian Eyben, Gláucia L Salomão, Johan Sundberg, Klaus R Scherer and Björn W Schuller
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:19
  10. Over recent years, the i-vector-based framework has been proven to provide state-of-the-art performance in speaker verification. Each utterance is projected onto a total factor space and is represented by a low-di...

    Authors: Wei Li, Tianfan Fu and Jie Zhu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:18
  11. Manual transcription of audio databases for the development of automatic speech recognition (ASR) systems is a costly and time-consuming process. In the context of deriving acoustic models adapted to a specifi...

    Authors: Petr Motlicek, David Imseng, Blaise Potard, Philip N. Garner and Ivan Himawan
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:17
  12. Singer identification is a difficult topic in music information retrieval because background instrumental music accompanies the singing voice, which reduces system performance. One of the main disadvantag...

    Authors: Tushar Ratanpara and Narendra Patel
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:16
  13. Optimal automatic speech recognition (ASR) takes place when the recognition system is tested under circumstances identical to those in which it was trained. However, in the real world, there exist many ...

    Authors: Randa Al-Wakeel, Mahmoud Shoman, Magdy Aboul-Ela and Sherif Abdou
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:15
  14. The Farrow-structure-based steerable broadband beamformer (FSBB) is particularly useful in applications where the sound source of interest may move around a wide angular range. However, in contrast with conven...

    Authors: Tiannan Wang and Huawei Chen
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:14
  15. This paper presents an objective speech quality model, ViSQOL, the Virtual Speech Quality Objective Listener. It is a signal-based, full-reference, intrusive metric that models human speech quality perception ...

    Authors: Andrew Hines, Jan Skoglund, Anil C Kokaram and Naomi Harte
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:13
  16. Deep neural network (DNN)-based approaches have been shown to be effective in many automatic speech recognition systems. However, few works have focused on DNNs for distant-talking speaker recognition. In this...

    Authors: Zhaofeng Zhang, Longbiao Wang, Atsuhiko Kai, Takanori Yamada, Weifeng Li and Masahiro Iwahashi
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:12
  17. Estimating the directions of arrival (DOAs) of multiple simultaneous mobile sound sources is an important step for various audio signal processing applications. In this contribution, we present an approach tha...

    Authors: Caleb Rascon, Gibran Fuentes and Ivan Meza
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:11
  18. Acoustic data transmission (ADT) forms a branch of the audio data hiding techniques with its capability of communicating data in short-range aerial space between a loudspeaker and a microphone. In this paper, ...

    Authors: Kiho Cho, Jae Choi and Nam Soo Kim
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:10
  19. Automatic diagnosis and monitoring of Alzheimer’s disease can have a significant impact on society as well as the well-being of patients. The part of the brain cortex that processes language abilities is one o...

    Authors: Ali Khodabakhsh, Fatih Yesil, Ekrem Guner and Cenk Demiroglu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:9
  20. This paper presents a voice conversion (VC) method that utilizes conditional restricted Boltzmann machines (CRBMs) for each speaker to obtain high-order speaker-independent spaces where voice features are conv...

    Authors: Toru Nakashika, Tetsuya Takiguchi and Yasuo Ariki
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:8
  21. Automatic forensic voice comparison (FVC) systems employed in forensic casework have often relied on Gaussian Mixture Model-Universal Background Models (GMM-UBMs) for modelling with relatively little researc...

    Authors: Chee Cheun Huang, Julien Epps and Tharmarajah Thiruvaran
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:7
  22. Music identification via audio fingerprinting has been an active research field in recent years. In the real-world environment, music queries are often deformed by various interferences which typically include...

    Authors: Xiu Zhang, Bilei Zhu, Linwei Li, Wei Li, Xiaoqiang Li, Wei Wang, Peizhong Lu and Wenqiang Zhang
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:6
  23. Owing to the suprasegmental behavior of emotional speech, turn-level features have demonstrated a better success than frame-level features for recognition-related tasks. Conventionally, such features are obtai...

    Authors: Mohit Shah, Chaitali Chakrabarti and Andreas Spanias
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:4
  24. In this paper, an initial feature vector based on the combination of the wavelet packet decomposition (WPD) and the Mel frequency cepstral coefficients (MFCCs) is proposed. For optimizing the initial feature v...

    Authors: Vahid Majidnezhad
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:3
  25. Deep neural networks (DNNs) have gained remarkable success in speech recognition, partially attributed to the flexibility of DNN models in learning complex patterns of speech signals. This flexibility, however...

    Authors: Shi Yin, Chao Liu, Zhiyong Zhang, Yiye Lin, Dong Wang, Javier Tejedor, Thomas Fang Zheng and Yinguo Li
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:2
  26. Vocal tremor has been simulated using a high-dimensional discrete vocal fold model. Specifically, respiratory, phonatory, and articulatory tremors have been modeled as instabilities in six parameters of the mo...

    Authors: Rubén Fraile, Juan Ignacio Godino-Llorente and Malte Kob
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:1
  27. Currently, acoustic spoken language recognition (SLR) and phonotactic SLR systems are widely used language recognition systems. To achieve better performance, researchers combine multiple subsystems with the r...

    Authors: Wei-Wei Liu, Wei-Qiang Zhang, Michael T Johnson and Jia Liu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:42
  28. Speech technology is firmly rooted in daily life, most notably in command-and-control (C&C) applications. C&C usability degrades quickly, however, when used by people with non-standard speech. We pursue a fu...

    Authors: Bart Ons, Jort F Gemmeke and Hugo Van hamme
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:43
  29. The full modulation spectrum is a high-dimensional representation of one-dimensional audio signals. Most previous research in automatic speech recognition converted this very rich representation into the equiv...

    Authors: Sara Ahmadi, Seyed Mohammad Ahadi, Bert Cranen and Lou Boves
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:36
  30. Building a voice-operated system for learning disabled users is a difficult task that requires a considerable amount of time and effort. Due to the wide spectrum of disabilities and their different related pho...

    Authors: Marek Bohac, Michaela Kucharova, Zoraida Callejas, Jan Nouza and Petr Červa
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:39
  31. In this paper, we propose a semi-blind, imperceptible, and robust digital audio watermarking algorithm. The proposed algorithm is based on cascading two well-known transforms: the discrete wavelet transform an...

    Authors: Ali Al-Haj
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:37
  32. Model-based speech enhancement algorithms that employ trained models, such as codebooks, hidden Markov models, Gaussian mixture models, etc., containing representations of speech such as linear predictive coef...

    Authors: Devireddy Hanumantha Rao Naidu and Sriram Srinivasan
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:35
  33. The task of automatic retrieval and extraction of lyrics from the web is of great importance to different Music Information Retrieval applications. However, despite its importance, very little research has bee...

    Authors: Rafael P Ribeiro, Murilo AP Almeida and Carlos N Silla Jr
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:27
  34. This paper studies a novel audio segmentation-by-classification approach based on factor analysis. The proposed technique compensates the within-class variability by using class-dependent factor loading matric...

    Authors: Diego Castán, Alfonso Ortega, Antonio Miguel and Eduardo Lleida
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:34
  35. This paper proposes a new speech enhancement (SE) algorithm utilizing constraints to the Wiener gain function which is capable of working at 10 dB and lower signal-to-noise ratios (SNRs). The wavelet threshold...

    Authors: Yanna Ma and Akinori Nishihara
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:32
  36. The current paper examines influences of speech rate on Fujisaki model parameters based on read speech from the BonnTempo-Corpus containing productions by 12 native speakers of German at five different intende...

    Authors: Hansjörg Mixdorff, Adrian Leemann and Volker Dellwo
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:33
  37. In many speech communication applications, robust localization and tracking of multiple speakers in noisy and reverberant environments are of major importance. Several algorithms to tackle this problem have be...

    Authors: Stephan Gerlach, Jörg Bitzer, Stefan Goetze and Simon Doclo
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:31
  38. In a bid to enhance the search performance, this paper presents an improved version of reduced candidate mechanism (RCM), an algebraic codebook search conducted on an algebraic code-excited linear prediction (...

    Authors: Ning-Yun Ku, Cheng-Yu Yeh and Shaw-Hwa Hwang
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:30
  39. This paper proposes two novel approaches for parameter estimation of a superpositional intonation model. These approaches present linguistic and paralinguistic assumptions for initializing a pre-existing stand...

    Authors: Humberto M Torres, Jorge A Gurlekian, Hansjörg Mixdorff and Hartmut Pfitzinger
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:28
  40. In this paper, unsupervised learning is used to separate percussive and harmonic sounds from monaural non-vocal polyphonic signals. Our algorithm is based on a modified non-negative matrix factorization (NMF) ...

    Authors: Francisco Jesus Canadas-Quesada, Pedro Vera-Candeas, Nicolas Ruiz-Reyes, Julio Carabias-Orti and Pablo Cabanas-Molero
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:26
  41. Composers may not provide instructions for playing their works, especially for instrument solos, and therefore, different musicians may give very different interpretations of the same work. Such differences us...

    Authors: Yi-Ju Lin, Tien-Ming Wang, Ta-Chun Chen, Yin-Lin Chen, Wei-Chen Chang and Alvin WY Su
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:25


Annual Journal Metrics

  • Citation Impact 2023
    Journal Impact Factor: 1.7
    5-year Journal Impact Factor: 1.6
    Source Normalized Impact per Paper (SNIP): 1.051
    SCImago Journal Rank (SJR): 0.414

  • Speed 2023
    Submission to first editorial decision (median days): 17
    Submission to acceptance (median days): 154

  • Usage 2023
    Downloads: 368,607
    Altmetric mentions: 70

Funding your APC

Open access funding and policy support by SpringerOpen

We offer a free open access support service to make it easier for you to discover and apply for article-processing charge (APC) funding.