Articles

  1. We propose an algorithm for the blind separation of single-channel audio signals. It is based on a parametric model that describes the spectral properties of the sounds of musical instruments independently of ...

    Authors: Sören Schulze and Emily J. King
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:6
  2. We propose a method of dynamically registering out-of-vocabulary (OOV) words by assigning the pronunciations of these words to pre-inserted OOV tokens, editing the pronunciations of the tokens. To do this, we ...

    Authors: Norihide Kitaoka, Bohan Chen and Yuya Obashi
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:4
  3. Instrumental playing techniques such as vibratos, glissandos, and trills often denote musical expressivity, both in classical and folk contexts. However, most existing approaches to music similarity retrieval f...

    Authors: Vincent Lostanlen, Christian El-Hajj, Mathias Rossignol, Grégoire Lafay, Joakim Andén and Mathieu Lagrange
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:3
  4. In this paper, a study addressing the task of tracking multiple concurrent speakers in reverberant conditions is presented. Since both past and future observations can contribute to the current location estima...

    Authors: Yuval Dorfan, Boaz Schwartz and Sharon Gannot
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:2
  5. The progressive paradigm is a promising strategy to optimize network performance for speech enhancement purposes. Recent works have shown different strategies to improve the accuracy of speech enhancement solu...

    Authors: Jorge Llombart, Dayana Ribas, Antonio Miguel, Luis Vicente, Alfonso Ortega and Eduardo Lleida
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:1
  6. In real applications, environmental effects such as additive noise and room reverberation lead to a mismatch between training and testing signals that substantially reduces the performance of far-field speaker...

    Authors: Masoud Geravanchizadeh and Sina Ghalamiosgouei
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:20
  7. In this paper, we investigate the performance of two deep learning paradigms for the audio-based tasks of acoustic scene, environmental sound and domestic activity classification. In particular, a convolutiona...

    Authors: Shahin Amiriparian, Maurice Gerczuk, Sandra Ottl, Lukas Stappen, Alice Baird, Lukas Koebe and Björn Schuller
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:19
  8. In this article, we conduct a comprehensive simulation study for the optimal scores of speaker recognition systems that are based on speaker embedding. For that purpose, we first revisit the optimal scores for...

    Authors: Dong Wang
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:18
  9. Depression is a widespread mental health problem around the world with a significant burden on economies. Its early diagnosis and treatment are critical to reduce the costs and even save lives. One key aspect ...

    Authors: Cenk Demiroglu, Aslı Beşirli, Yasin Ozkanca and Selime Çelik
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:17
  10. Drone-embedded sound source localization (SSL) has promising application prospects in search and rescue scenarios made challenging by bad lighting conditions or occlusions. However, the problem gets complic...

    Authors: Alif Bin Abdul Qayyum, K. M. Naimul Hassan, Adrita Anika, Md. Farhan Shadiq, Md Mushfiqur Rahman, Md. Tariqul Islam, Sheikh Asif Imran, Shahruk Hossain and Mohammad Ariful Haque
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:16
  11. Humanoid robots require microphone arrays to acquire speech signals from the human communication partner while suppressing noise, reverberation, and interferences. Unlike many other applications, microp...

    Authors: Gongping Huang, Jingdong Chen, Jacob Benesty, Israel Cohen and Xudong Zhao
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:15
  12. Microphone leakage or crosstalk is a common problem in multichannel close-talk audio recordings (e.g., meetings or live music performances), which occurs when a target signal does not only couple into its dedi...

    Authors: Patrick Meyer, Samy Elshamy and Tim Fingscheidt
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:14
  13. A method to locate sound sources using an audio recording system mounted on an unmanned aerial vehicle (UAV) is proposed. The method introduces extension algorithms to apply on top of a baseline approach, whic...

    Authors: Benjamin Yen and Yusuke Hioka
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:13
  14. Estimation problems like room geometry estimation and localization of acoustic reflectors are of great interest and importance in robot and drone audition. Several methods for tackling these problems exist, bu...

    Authors: Usama Saqib, Sharon Gannot and Jesper Rindom Jensen
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:12
  15. Ego-noise, i.e., the noise a robot causes by its own motions, significantly corrupts the microphone signal and severely impairs the robot’s capability to interact seamlessly with its environment. Therefore, su...

    Authors: Alexander Schmidt, Andreas Brendel, Thomas Haubner and Walter Kellermann
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:11
  16. A keyword spotting algorithm implemented on an embedded system using a depthwise separable convolutional neural network classifier is reported. The proposed system was derived from a high-complexity system wit...

    Authors: Peter Mølgaard Sørensen, Bastian Epp and Tobias May
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:10
  17. In this work, we present an ensemble for automated audio classification that fuses different types of features extracted from audio files. These features are evaluated, compared, and fused with the goal of pro...

    Authors: Loris Nanni, Yandre M. G. Costa, Rafael L. Aguiar, Rafael B. Mangolin, Sheryl Brahnam and Carlos N. Silla Jr.
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:8
  18. In this paper, we introduce a quadratic approach for single-channel noise reduction. The desired signal magnitude is estimated by applying a linear filter to a modified version of the observations’ vector. The...

    Authors: Gal Itzhak, Jacob Benesty and Israel Cohen
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:7
  19. In order to improve the performance of hand-crafted features to detect playback speech, two discriminative features, constant-Q variance-based octave coefficients and constant-Q mean-based octave coefficients,...

    Authors: Jichen Yang, Longting Xu, Bo Ren and Yunyun Ji
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:6
  20. This paper presents a new approach based on recurrent neural networks (RNN) to the multiclass audio segmentation task whose goal is to classify an audio signal as speech, music, noise or a combination of these...

    Authors: Pablo Gimeno, Ignacio Viñals, Alfonso Ortega, Antonio Miguel and Eduardo Lleida
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:5
  21. Binaural sound source localization is an important and widely used perceptually based method and it has been applied to machine learning studies by many researchers based on head-related transfer function (HRT...

    Authors: Jing Wang, Jin Wang, Kai Qian, Xiang Xie and Jingming Kuang
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:4
  22. Attention-based encoder-decoder models have recently shown competitive performance for automatic speech recognition (ASR) compared to conventional ASR systems. However, how to employ attention models for onlin...

    Authors: Junfeng Hou, Wu Guo, Yan Song and Li-Rong Dai
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:3
  23. Experimental data combining complementary measures based on the oral airflow signal is presented in this paper, exploring the view that European Portuguese voiced stops are produced in a similar fashion to Ger...

    Authors: Luis M. T. Jesus and Maria Conceição Costa
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:2
  24. In this paper, we use empirical mode decomposition and Hurst-based mode selection (EMDH) along with deep learning architecture using a convolutional neural network (CNN) to improve the recognition of dysarthri...

    Authors: Mohammed Sidi Yakoub, Sid-ahmed Selouani, Brahim-Fares Zaidi and Asma Bouchair
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2020 2020:1
  25. We present a novel model adaptation approach to deal with data variability for speaker diarization in a broadcast environment. Expensive human annotated data can be used to mitigate the domain mismatch by mean...

    Authors: Ignacio Viñals, Alfonso Ortega, Jesús Villalba, Antonio Miguel and Eduardo Lleida
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:24
  26. In this paper, we propose a score-informed source separation framework based on non-negative matrix factorization (NMF) and dynamic time warping (DTW) that suits both offline and online systems. The propos...

    Authors: Antonio Jesús Munoz-Montoro, Julio José Carabias-Orti, Pedro Vera-Candeas, Francisco Jesús Canadas-Quesada and Nicolás Ruiz-Reyes
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:23
  27. Text-to-speech (TTS) synthesis systems have been widely used in general-purpose applications based on the generation of speech. Nonetheless, there are some domains, such as storytelling or voice output aid dev...

    Authors: Marc Freixes, Francesc Alías and Joan Claudi Socoró
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:22
  28. So-called full-face masks are essential for fire fighters to ensure respiratory protection in smoke diving incidents. While such masks are absolutely necessary for protection purposes on one hand, they impair the...

    Authors: Michael Brodersen, Achim Volmer and Gerhard Schmidt
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:21
  29. According to the encoding and decoding mechanism of binaural cue coding (BCC), in this paper, the speech and noise are considered as left channel signal and right channel signal of the BCC framework, respectiv...

    Authors: Xianyun Wang and Changchun Bao
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:20
  30. Phonetic information is one of the most essential components of a speech signal, playing an important role for many speech processing tasks. However, it is difficult to integrate phonetic information into spea...

    Authors: Yi Liu, Liang He, Jia Liu and Michael T. Johnson
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:19
  31. A method called joint connectionist temporal classification (CTC)-attention-based speech recognition has recently received increasing focus and has achieved impressive performance. A hybrid end-to-end architec...

    Authors: Chu-Xiong Qin, Wen-Lin Zhang and Dan Qu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:18
  32. Voice conversion (VC) is a technique of exclusively converting speaker-specific information in the source speech while preserving the associated phonemic information. Non-negative matrix factorization (NMF)-ba...

    Authors: Yuki Takashima, Toru Nakashika, Tetsuya Takiguchi and Yasuo Ariki
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:17
  33. Search on speech (SoS) is a challenging area due to the huge amount of information stored in audio and video repositories. Spoken term detection (STD) is an SoS-related task aiming to retrieve data from a spee...

    Authors: Javier Tejedor, Doroteo T. Toledano, Paula Lopez-Otero, Laura Docio-Fernandez, Ana R. Montalvo, Jose M. Ramirez, Mikel Peñagarikano and Luis Javier Rodriguez-Fuentes
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:16
  34. Voice-enabled interaction systems in domestic environments have attracted significant interest recently, being the focus of smart home research projects and commercial voice assistant home devices. Within the ...

    Authors: Panagiotis Giannoulis, Gerasimos Potamianos and Petros Maragos
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:15
  35. Speech emotion recognition methods combining articulatory information with acoustic features have been previously shown to improve recognition performance. Collection of articulatory data on a large scale may ...

    Authors: Mohit Shah, Ming Tu, Visar Berisha, Chaitali Chakrabarti and Andreas Spanias
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:14
  36. The huge amount of information stored in audio and video repositories makes search on speech (SoS) a priority area nowadays. Within SoS, Query-by-Example Spoken Term Detection (QbE STD) aims to retrieve data f...

    Authors: Javier Tejedor, Doroteo T. Toledano, Paula Lopez-Otero, Laura Docio-Fernandez, Mikel Peñagarikano, Luis Javier Rodriguez-Fuentes and Antonio Moreno-Sandoval
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:13
  37. In this paper, we apply a latent class model (LCM) to the task of speaker diarization. LCM is similar to Patrick Kenny’s variational Bayes (VB) method in that it uses soft information and avoids premature hard...

    Authors: Liang He, Xianhong Chen, Can Xu, Yi Liu, Jia Liu and Michael T. Johnson
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:12
  38. We propose a new method for music detection from broadcasting contents using the convolutional neural networks with a Mel-scale kernel. In this detection task, music segments should be annotated from the broad...

    Authors: Byeong-Yong Jang, Woon-Haeng Heo, Jung-Hyun Kim and Oh-Wook Kwon
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:11
  39. Singing voice analysis has been a topic of research to assist several applications in the domain of music information retrieval systems. One such major area is singer identification (SID). There has been enormo...

    Authors: Deepali Y. Loni and Shaila Subbaraman
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:10
  40. Audio signals represent a wide diversity of acoustic events, from background environmental noise to spoken communication. Machine learning models such as neural networks have already been proposed for audio si...

    Authors: Diego de Benito-Gorron, Alicia Lozano-Diez, Doroteo T. Toledano and Joaquin Gonzalez-Rodriguez
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:9
  41. There are many studies on detecting human speech from artificially generated speech and automatic speaker verification (ASV) that aim to detect and identify whether the given speech belongs to a given speaker....

    Authors: Zeyan Oo, Longbiao Wang, Khomdet Phapatanaburi, Meng Liu, Seiichi Nakagawa, Masahiro Iwahashi and Jianwu Dang
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:8
  42. In this paper, an adaptive averaging a priori SNR estimation employing critical band processing is proposed. The proposed method modifies the current decision-directed a priori SNR estimation to achieve faster...

    Authors: Lara Nahma, Pei Chee Yong, Hai Huyen Dam and Sven Nordholm
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:7
  43. In response to renewed interest in virtual and augmented reality, the need for high-quality spatial audio systems has emerged. The reproduction of immersive and realistic virtual sound requires high resolution...

    Authors: Zamir Ben-Hur, David Lou Alon, Boaz Rafaely and Ravish Mehra
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:5
  44. This paper proposes two novel linguistic features extracted from text input for prosody generation in a Mandarin text-to-speech system. The first feature is the punctuation confidence (PC), which measures the ...

    Authors: Chen-Yu Chiang, Yu-Ping Hung, Han-Yun Yeh, I-Bin Liao and Chen-Ming Pan
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:4
  45. Current automatic speech recognition (ASR) systems achieve over 90–95% accuracy, depending on the methodology applied and datasets used. However, the level of accuracy decreases significantly when the same ASR...

    Authors: Kacper Radzikowski, Robert Nowak, Le Wang and Osamu Yoshie
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:3
  46. Filter banks on spectrums play an important role in many audio applications. Traditionally, the filters are linearly distributed on perceptual frequency scale such as Mel scale. To make the output smoother, th...

    Authors: Teng Zhang and Ji Wu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019 2019:1
  47. This paper deals with a project of Automatic Bird Species Recognition Based on Bird Vocalization. Eighteen bird species of 6 different families were analyzed. At first, human factor cepstral coefficients repre...

    Authors: Jiri Stastny, Michal Munk and Lubos Juranek
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2018 2018:19

Annual Journal Metrics

  • 2022 Citation Impact
    2.4 - 2-year Impact Factor
    2.0 - 5-year Impact Factor
    1.081 - SNIP (Source Normalized Impact per Paper)
    0.458 - SJR (SCImago Journal Rank)

  • 2023 Speed
    17 days - submission to first editorial decision for all manuscripts (Median)
    154 days - submission to accept (Median)

  • 2023 Usage
    368,607 downloads
    70 Altmetric mentions

Funding your APC

Open access funding and policy support by SpringerOpen

We offer a free open access support service to make it easier for you to discover and apply for article-processing charge (APC) funding.