Articles

Page 2 of 7

  1. Content type: Research

In multichannel spatial audio coding (SAC), accurate representation of virtual sounds and efficient compression of spatial parameters are key to faithful reproduction of spatial sound effects in 3...

    Authors: Li Gao, Ruimin Hu, Xiaochen Wang, Gang Li, Yuhong Yang and Weiping Tu

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2016 2016:13


  2. Content type: Research

An adaptive muting method using an optimized parametric shaping function as part of the ITU-T G.722 Appendix IV packet loss concealment algorithm is proposed. The packet loss concealment algorithm incorporating...

    Authors: Bong-Ki Lee and Joon-Hyuk Chang

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2016 2016:11


  3. Content type: Research

    Automatic speech recognition is becoming more ubiquitous as recognition performance improves, capable devices increase in number, and areas of new application open up. Neural network acoustic models that can u...

    Authors: Ryan Price, Ken-ichi Iso and Koichi Shinoda

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2016 2016:10


  4. Content type: Research

    Current text-to-speech systems do not support the effective provision of the semantics and the cognitive aspects of the documents’ typographic cues (e.g., font type, style, and size). A novel approach is intro...

    Authors: Dimitrios Tsonos and Georgios Kouroupetroglou

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2016 2016:8


  5. Content type: Research

    Audio classification, classifying audio segments into broad categories such as speech, non-speech, and silence, is an important front-end problem in speech signal processing. Dozens of features have been propo...

    Authors: Xu-Kui Yang, Liang He, Dan Qu, Wei-Qiang Zhang and Michael T. Johnson

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2016 2016:9


  6. Content type: Research

    Time-frequency (T-F) masking is an effective method for stereo speech source separation. However, reliable estimation of the T-F mask from sound mixtures is a challenging task, especially when room reverberati...

    Authors: Yang Yu, Wenwu Wang and Peng Han

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2016 2016:7


  7. Content type: Research

    Today, a large amount of audio data is available on the web in the form of audiobooks, podcasts, video lectures, video blogs, news bulletins, etc. In addition, we can effortlessly record and store audio data s...

    Authors: Tejas Godambe, Sai Krishna Rallabandi, Suryakanth V. Gangashetty, Ashraf Alkhairy and Afshan Jafri

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2016 2016:6


  8. Content type: Research

    Indian classical music, including its two varieties, Carnatic and Hindustani music, has a rich music tradition and enjoys a wide audience from various parts of the world. The Carnatic music which is more popul...

    Authors: Stanly Mammen, Ilango Krishnamurthi, A. Jalaja Varma and G. Sujatha

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2016 2016:5


  9. Content type: Research

    Query-by-example spoken term detection (QbE STD) aims at retrieving data from a speech repository given an acoustic query containing the term of interest as input. Nowadays, it is receiving much interest due t...

    Authors: Javier Tejedor, Doroteo T. Toledano, Paula Lopez-Otero, Laura Docio-Fernandez and Carmen Garcia-Mateo

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2016 2016:1


  10. Content type: Research

    Using a proper distribution function for speech signal or for its representations is of crucial importance in statistical-based speech processing algorithms. Although the most commonly used probability density...

    Authors: Ali Aroudi, Hadi Veisi, Hossein Sameti and Zahra Mafakheri

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:35


  11. Content type: Research

    Using a recently proposed informed spatial filter, it is possible to effectively and robustly reduce reverberation from speech signals captured in noisy environments using multiple microphones. Late reverberat...

    Authors: Sebastian Braun and Emanuël A. P. Habets

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:34


  12. Content type: Research

    Audio segmentation is important as a pre-processing task to improve the performance of many speech technology tasks and, therefore, it has an undoubted research interest. This paper describes the database, the...

    Authors: Diego Castán, David Tavarez, Paula Lopez-Otero, Javier Franco-Pedroso, Héctor Delgado, Eva Navas, Laura Docio-Fernández, Daniel Ramos, Javier Serrano, Alfonso Ortega and Eduardo Lleida

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:33


  13. Content type: Research

The need for a large amount of parallel data is a major hurdle for the practical use of voice conversion (VC). This paper presents a novel framework of exemplar-based VC that only requires a small number o...

    Authors: Ryo Aihara, Takao Fujii, Toru Nakashika, Tetsuya Takiguchi and Yasuo Ariki

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:32


  14. Content type: Research

    In this paper, a semi-fragile and blind digital speech watermarking technique for online speaker recognition systems based on the discrete wavelet packet transform (DWPT) and quantization index modulation (QIM...

    Authors: Mohammad Ali Nematollahi, Mohammad Ali Akhaee, S. A. R. Al-Haddad and Hamurabi Gamboa-Rosales

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:31


  15. Content type: Research

    The presence of physical task stress induces changes in the speech production system which in turn produces changes in speaking behavior. This results in measurable acoustic correlates including changes to for...

    Authors: Keith W. Godin and John H. L. Hansen

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:29


  16. Content type: Research

    The identity of musical instruments is reflected in the acoustic attributes of musical notes played with them. Recently, it has been argued that these characteristics of musical identity (or timbre) can be bes...

    Authors: Kailash Patil and Mounya Elhilali

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:27


  17. Content type: Research

    In recent years, deep learning has not only permeated the computer vision and speech recognition research fields but also fields such as acoustic event detection (AED). One of the aims of AED is to detect and ...

    Authors: Miquel Espi, Masakiyo Fujimoto, Keisuke Kinoshita and Tomohiro Nakatani

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:26


  18. Content type: Research

    A multimodal voice conversion (VC) method for noisy environments is proposed. In our previous non-negative matrix factorization (NMF)-based VC method, source and target exemplars are extracted from parallel tr...

    Authors: Kenta Masaka, Ryo Aihara, Tetsuya Takiguchi and Yasuo Ariki

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:24


  19. Content type: Research

    In this paper we present the Latin Music Mood Database, an extension of the Latin Music Database but for the task of music mood/emotion classification. The method for assigning mood labels to the musical recor...

    Authors: Carolina L. dos Santos and Carlos N. Silla Jr

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:23


  20. Content type: Research

Support vector machines (SVMs) have played an important role in state-of-the-art language recognition systems. The recently developed extreme learning machine (ELM) tends to have better scalability and ach...

    Authors: Jiaming Xu, Wei-Qiang Zhang, Jia Liu and Shanhong Xia

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:22


  21. Content type: Research

    Spoken term detection (STD) aims at retrieving data from a speech repository given a textual representation of the search term. Nowadays, it is receiving much interest due to the large volume of multimedia inf...

    Authors: Javier Tejedor, Doroteo T. Toledano, Paula Lopez-Otero, Laura Docio-Fernandez, Carmen Garcia-Mateo, Antonio Cardenal, Julian David Echeverry-Correa, Alejandro Coucheiro-Limeres, Julia Olcoz and Antonio Miguel

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:21


  22. Content type: Research

The automatic recognition of MP3-compressed speech presents a challenge to current systems due to the lossy nature of the compression, which causes irreversible degradation of the speech waveform. This article eval...

    Authors: Michal Borsky, Petr Pollak and Petr Mizera

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:20


  23. Content type: Research

    We investigate the automatic recognition of emotions in the singing voice and study the worth and role of a variety of relevant acoustic parameters. The data set contains phrases and vocalises sung by eight re...

    Authors: Florian Eyben, Gláucia L Salomão, Johan Sundberg, Klaus R Scherer and Björn W Schuller

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:19


  24. Content type: Research

Over recent years, the i-vector-based framework has been proven to provide state-of-the-art performance in speaker verification. Each utterance is projected onto a total factor space and is represented by a low-di...

    Authors: Wei Li, Tianfan Fu and Jie Zhu

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:18


  25. Content type: Research

    Manual transcription of audio databases for the development of automatic speech recognition (ASR) systems is a costly and time-consuming process. In the context of deriving acoustic models adapted to a specifi...

    Authors: Petr Motlicek, David Imseng, Blaise Potard, Philip N. Garner and Ivan Himawan

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:17


  26. Content type: Research

Singer identification is a difficult topic in music information retrieval because background instrumental music is mixed with the singing voice, which reduces system performance. One of the main disadvantag...

    Authors: Tushar Ratanpara and Narendra Patel

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:16


  27. Content type: Research

Optimal automatic speech recognition (ASR) takes place when the recognition system is tested under circumstances identical to those in which it was trained. However, in the real world, there exist many ...

    Authors: Randa Al-Wakeel, Mahmoud Shoman, Magdy Aboul-Ela and Sherif Abdou

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:15


  28. Content type: Research

The Farrow-structure-based steerable broadband beamformer (FSBB) is particularly useful in applications where the sound source of interest may move over a wide angular range. However, in contrast with conven...

    Authors: Tiannan Wang and Huawei Chen

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:14


  29. Content type: Research

    This paper presents an objective speech quality model, ViSQOL, the Virtual Speech Quality Objective Listener. It is a signal-based, full-reference, intrusive metric that models human speech quality perception ...

    Authors: Andrew Hines, Jan Skoglund, Anil C Kokaram and Naomi Harte

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:13


  30. Content type: Research

    Deep neural network (DNN)-based approaches have been shown to be effective in many automatic speech recognition systems. However, few works have focused on DNNs for distant-talking speaker recognition. In this...

    Authors: Zhaofeng Zhang, Longbiao Wang, Atsuhiko Kai, Takanori Yamada, Weifeng Li and Masahiro Iwahashi

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:12


  31. Content type: Research

    Estimating the directions of arrival (DOAs) of multiple simultaneous mobile sound sources is an important step for various audio signal processing applications. In this contribution, we present an approach tha...

    Authors: Caleb Rascon, Gibran Fuentes and Ivan Meza

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:11


  32. Content type: Research

    Acoustic data transmission (ADT) forms a branch of the audio data hiding techniques with its capability of communicating data in short-range aerial space between a loudspeaker and a microphone. In this paper, ...

    Authors: Kiho Cho, Jae Choi and Nam Soo Kim

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:10


  33. Content type: Research

    Automatic diagnosis and monitoring of Alzheimer’s disease can have a significant impact on society as well as the well-being of patients. The part of the brain cortex that processes language abilities is one o...

    Authors: Ali Khodabakhsh, Fatih Yesil, Ekrem Guner and Cenk Demiroglu

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:9


  34. Content type: Research

    This paper presents a voice conversion (VC) method that utilizes conditional restricted Boltzmann machines (CRBMs) for each speaker to obtain high-order speaker-independent spaces where voice features are conv...

    Authors: Toru Nakashika, Tetsuya Takiguchi and Yasuo Ariki

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:8


  35. Content type: Research

    Automatic forensic voice comparison (FVC) systems employed in forensic casework have often relied on Gaussian Mixture Model - Universal Background Models (GMM-UBMs) for modelling with relatively little researc...

    Authors: Chee Cheun Huang, Julien Epps and Tharmarajah Thiruvaran

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:7


  36. Content type: Research

    Music identification via audio fingerprinting has been an active research field in recent years. In the real-world environment, music queries are often deformed by various interferences which typically include...

    Authors: Xiu Zhang, Bilei Zhu, Linwei Li, Wei Li, Xiaoqiang Li, Wei Wang, Peizhong Lu and Wenqiang Zhang

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:6


  37. Content type: Research

    Owing to the suprasegmental behavior of emotional speech, turn-level features have demonstrated a better success than frame-level features for recognition-related tasks. Conventionally, such features are obtai...

    Authors: Mohit Shah, Chaitali Chakrabarti and Andreas Spanias

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:4


  38. Content type: Research

    In this paper, an initial feature vector based on the combination of the wavelet packet decomposition (WPD) and the Mel frequency cepstral coefficients (MFCCs) is proposed. For optimizing the initial feature v...

    Authors: Vahid Majidnezhad

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:3


  39. Content type: Research

    Deep neural networks (DNNs) have gained remarkable success in speech recognition, partially attributed to the flexibility of DNN models in learning complex patterns of speech signals. This flexibility, however...

    Authors: Shi Yin, Chao Liu, Zhiyong Zhang, Yiye Lin, Dong Wang, Javier Tejedor, Thomas Fang Zheng and Yinguo Li

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:2


  40. Content type: Research

    Vocal tremor has been simulated using a high-dimensional discrete vocal fold model. Specifically, respiratory, phonatory, and articulatory tremors have been modeled as instabilities in six parameters of the mo...

    Authors: Rubén Fraile, Juan Ignacio Godino-Llorente and Malte Kob

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2015 2015:1


  41. Content type: Research

    Currently, acoustic spoken language recognition (SLR) and phonotactic SLR systems are widely used language recognition systems. To achieve better performance, researchers combine multiple subsystems with the r...

    Authors: Wei-Wei Liu, Wei-Qiang Zhang, Michael T Johnson and Jia Liu

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:42


  42. Content type: Research

Speech technology is firmly rooted in daily life, most notably in command-and-control (C&C) applications. C&C usability degrades quickly, however, when used by people with non-standard speech. We pursue a fu...

    Authors: Bart Ons, Jort F Gemmeke and Hugo Van hamme

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2014 2014:43

