
Articles

Page 3 of 11

  1. In this article, we adapted five recent SSL methods to the task of audio classification. The first two methods, namely Deep Co-Training (DCT) and Mean Teacher (MT), involve two collaborative neural networks. T...

    Authors: Léo Cances, Etienne Labbé and Thomas Pellegrini
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:23
  2. In this paper, we propose a supervised single-channel speech enhancement method that combines Kullback-Leibler (KL) divergence-based non-negative matrix factorization (NMF) and a hidden Markov model (NMF-HMM)....

    Authors: Yang Xiang, Liming Shi, Jesper Lisby Højvang, Morten Højfeldt Rasmussen and Mads Græsbøll Christensen
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:22
  3. Automatic speech and music activity detection (SMAD) is an enabling task that can help segment, index, and pre-process audio content in radio broadcast and TV programs. However, due to copyright concerns and t...

    Authors: Yun-Ning Hung, Chih-Wei Wu, Iroro Orife, Aaron Hipple, William Wolcott and Alexander Lerch
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:21
  4. Speech emotion recognition is a key branch of affective computing. Nowadays, it is common to detect emotional disorders through speech emotion recognition. Various detection methods of emotion recognition, such...

    Authors: Jinxing Gao, Diqun Yan and Mingyu Dong
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:20
  5. Most state-of-the-art speech systems use deep neural networks (DNNs). These systems require large amounts of data for training. Hence, training state-of-the-art frameworks on under-resourced speech challenge...

    Authors: Vincent Roger, Jérôme Farinas and Julien Pinquier
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:19
  6. PlugSonic is a series of web- and mobile-based applications designed to edit samples and apply audio effects (PlugSonic Sample) and create and experience dynamic and navigable soundscapes and sonic narratives ...

    Authors: Marco Comunità, Andrea Gerino and Lorenzo Picinali
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:18
  7. Language recognition based on embedding aims to maximize inter-class variance and minimize intra-class variance. Previous research is limited to the training constraint of a single centroid, which cannot ac...

    Authors: Minghang Ju, Yanyan Xu, Dengfeng Ke and Kaile Su
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:17
  8. By means of spatial clustering and time-frequency masking, a mixture of multiple speakers and noise can be separated into the underlying signal components. The parameters of a model, such as a complex angular ...

    Authors: Alexander Bohlender, Lucas Van Severen, Jonathan Sterckx and Nilesh Madhu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:16
  9. To improve the sound quality of hearing devices, equalization filters can be used to achieve acoustic transparency, i.e., listening with the device in the ear is perceptually similar to the open ear. The equal...

    Authors: Henning Schepker, Florian Denk, Birger Kollmeier and Simon Doclo
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:15
  10. Subtitles are a crucial component of Digital Entertainment Content (DEC, such as movies and TV shows) localization. With an ever-increasing catalog (≈ 2M titles) and localization expansion (30+ languages), automat...

    Authors: Honey Gupta and Mayank Sharma
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:14
  11. In lossless audio compression, the predictive residuals must remain sparse when entropy coding is applied. The sign algorithm (SA) is a conventional method for minimizing the magnitudes of residuals; however, ...

    Authors: Taiyo Mineo and Hayaru Shouno
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:12
  12. Multiple predominant instrument recognition in polyphonic music is addressed using decision level fusion of three transformer-based architectures on an ensemble of visual representations. The ensemble consists...

    Authors: Lekshmi Chandrika Reghunath and Rajeev Rajan
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:11
  13. The domain of spatial audio comprises methods for capturing, processing, and reproducing audio content that contains spatial information. Data-based methods are those that operate directly on the spatial infor...

    Authors: Maximo Cobos, Jens Ahrens, Konrad Kowalczyk and Archontis Politis
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:10
  14. Head-related transfer function (HRTF) individualization can improve the perception of binaural sound. The interaural time difference (ITD) of the HRTF is a relevant cue for sound localization, especially in az...

    Authors: Pablo Gutierrez-Parera, Jose J. Lopez, Javier M. Mora-Merchan and Diego F. Larios
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:9
  15. Humans can recognize someone’s identity through their voice and describe the timbral phenomena of voices. Likewise, the singing voice also has timbral phenomena. In vocal pedagogy, vocal teachers listen and th...

    Authors: Yanze Xu, Weiqing Wang, Huahua Cui, Mingyang Xu and Ming Li
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:8
  16. Polyphonic sound event detection aims to detect the types of sound events that occur in given audio clips, along with their onset and offset times, where multiple sound events may occur simultaneously. Deep learni...

    Authors: Haitao Li, Shuguo Yang and Wenwu Wang
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:5
  17. In this study, we propose a methodology for separating a singing voice from musical accompaniment in a monaural musical mixture. The proposed method uses robust principal component analysis (RPCA), followed by...

    Authors: Wen-Hsing Lai and Siou-Lin Wang
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:4
  18. One of the greatest challenges in the development of binaural machine audition systems is the disambiguation between front and back audio sources, particularly in complex spatial audio scenes. The goal of this...

    Authors: Sławomir K. Zieliński, Paweł Antoniuk, Hyunkook Lee and Dale Johnson
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:3
  19. Conventional automatic speech recognition (ASR) and emerging end-to-end (E2E) speech recognition have achieved promising results after being provided with sufficient resources. However, for low-resource langua...

    Authors: Siqing Qin, Longbiao Wang, Sheng Li, Jianwu Dang and Lixin Pan
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:2
  20. In this paper, we propose a novel algorithm for blind source extraction (BSE) of a moving acoustic source recorded by multiple microphones. The algorithm is based on independent vector extraction (IVE) where t...

    Authors: Jakub Janský, Zbyněk Koldovský, Jiří Málek, Tomáš Kounovský and Jaroslav Čmejla
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2022 2022:1
  21. With the sharp booming of online live streaming platforms, some anchors seek profits and accumulate popularity by mixing inappropriate content into live programs. After being blacklisted, these anchors even fo...

    Authors: Jiacheng Yao, Jing Zhang, Jiafeng Li and Li Zhuo
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:45
  22. We present an unsupervised domain adaptation (UDA) method for a lip-reading model, i.e., an image-based speech recognition model. Most conventional UDA methods cannot be applied when the adaptation data co...

    Authors: Yuki Takashima, Ryoichi Takashima, Ryota Tsunoda, Ryo Aihara, Tetsuya Takiguchi, Yasuo Ariki and Nobuaki Motoyama
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:44
  23. Deep learning techniques are currently being applied in automated text-to-speech (TTS) systems, resulting in significant improvements in performance. However, these methods require large amounts of text-speech...

    Authors: Zolzaya Byambadorj, Ryota Nishimura, Altangerel Ayush, Kengo Ohta and Norihide Kitaoka
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:42
  24. Voice conversion transforms a source speaker’s voice into that of a target speaker while keeping the linguistic content unchanged. Recently, one-shot voice conversion has gradually become a hot topic for its potentially wide r...

    Authors: Fangkun Liu, Hui Wang, Renhua Peng, Chengshi Zheng and Xiaodong Li
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:40
  25. This paper presents a new dataset of measured multichannel room impulse responses (RIRs) named dEchorate. It includes annotations of early echo timings and 3D positions of microphones, real sources, and image ...

    Authors: Diego Di Carlo, Pinchas Tandeitnik, Cédric Foy, Nancy Bertin, Antoine Deleforge and Sharon Gannot
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:39
  26. In this paper, a multichannel learning-based network is proposed for sound source separation in a reverberant field. The network can be divided into two parts according to the training strategies. In the first s...

    Authors: You-Siang Chen, Zi-Jie Lin and Mingsian R. Bai
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:38
  27. High-quality rendering of spatial sound fields in real time is becoming increasingly important with the steadily growing interest in virtual and augmented reality technologies. Typically, a spherical microphon...

    Authors: Johannes M. Arend, Tim Lübeck and Christoph Pörschmann
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:37
  28. Measurements of the directivity of acoustic sound sources must be interpolated in almost all cases, either for spatial upsampling to higher resolution representations of the data, for spatial resampling to ano...

    Authors: David Ackermann, Fabian Brinkmann, Franz Zotter, Malte Kob and Stefan Weinzierl
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:36
  29. The acoustic echo cannot be entirely removed by linear adaptive filters due to the nonlinear relationship between the echo and the far-end signal. Usually, a post-processing module is required to further suppr...

    Authors: Hongsheng Chen, Guoliang Chen, Kai Chen and Jing Lu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:35
  30. Code-switching (CS) refers to the phenomenon of using more than one language in an utterance, and it presents a great challenge to automatic speech recognition (ASR) due to the code-switching property in one utt...

    Authors: Yanhua Long, Shuang Wei, Jie Lian and Yijie Li
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:34
  31. Many modern smart devices are equipped with a microphone array and a loudspeaker (or are able to connect to one). Acoustic echo cancellation algorithms, specifically their multi-microphone variants, are essent...

    Authors: Nili Cohen, Gershon Hazan, Boaz Schwartz and Sharon Gannot
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:33
  32. The minimum mean-square error (MMSE)-based noise PSD estimators have been used widely for speech enhancement. However, the MMSE noise PSD estimators assume that the noise signal changes at a slower rate than t...

    Authors: Sujan Kumar Roy and Kuldip K. Paliwal
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:32
  33. The performance of speech recognition systems trained with neutral utterances degrades significantly when these systems are tested with emotional speech. Since everybody can speak emotionally in the real-world...

    Authors: Masoud Geravanchizadeh, Elnaz Forouhandeh and Meysam Bashirpour
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:31
  34. If music is the language of the universe, musical note onsets may be the syllables for this language. Not only do note onsets define the temporal pattern of a musical piece, but their time-frequency characteri...

    Authors: Mina Mounir, Peter Karsmakers and Toon van Waterschoot
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:30
  35. To improve the performance of speech enhancement in a complex noise environment, a joint constrained dictionary learning method for single-channel speech enhancement is proposed, which solves the “cross projec...

    Authors: Linhui Sun, Yunyi Bu, Pingan Li and Zihao Wu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:29
  36. The last decade brought significant advances in automatic speech recognition (ASR) thanks to the evolution of deep learning methods. ASR systems evolved from pipeline-based systems that modeled hand-crafted s...

    Authors: Alexandru-Lucian Georgescu, Alessandro Pappalardo, Horia Cucu and Michaela Blott
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:28
  37. Many end-to-end approaches have been proposed to detect predefined keywords. For multi-keyword scenarios, there are still two bottlenecks that need to be resolved: (1) the distribution of important data th...

    Authors: Gui-Xin Shi, Wei-Qiang Zhang, Guan-Bo Wang, Jing Zhao, Shu-Zhou Chai and Ze-Yu Zhao
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:27
  38. Lately, the self-attention mechanism has marked a new milestone in the field of automatic speech recognition (ASR). Nevertheless, its performance is susceptible to environmental intrusions as the system predic...

    Authors: Lujun Li, Yikai Kang, Yuchen Shi, Ludwig Kürzinger, Tobias Watzel and Gerhard Rigoll
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:26
  39. Due to the ad hoc nature of wireless acoustic sensor networks, the position of the sensor nodes is typically unknown. This contribution proposes a technique to estimate the position and orientation of the sens...

    Authors: Tobias Gburrek, Joerg Schmalenstroeer and Reinhold Haeb-Umbach
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:25
  40. Estimating time-frequency domain masks for single-channel speech enhancement using deep learning methods has recently become a popular research field with promising results. In this paper, we propose a novel comp...

    Authors: Ziyi Xu, Samy Elshamy, Ziyue Zhao and Tim Fingscheidt
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:24
  41. Multiple sound source localization has been a topic of intense interest in recent years. Single Source Zone (SSZ)-based localization methods achieve good performance due to the detection and utilization of the Time-F...

    Authors: Maoshen Jia, Shang Gao and Changchun Bao
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:23
  42. In this paper, we propose a novel feature compensation algorithm based on independent noise estimation, which employs a Gaussian mixture model (GMM) with fewer Gaussian components to rapidly estimate the noise...

    Authors: Yong Lü, Han Lin, Pingping Wu and Yitao Chen
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:22
  43. When designing closed-loop electro-acoustic systems, which can commonly be found in hearing aids or public address systems, the most challenging task is canceling and/or suppressing the feedback caused by the ...

    Authors: Marco Gimm, Philipp Bulling and Gerhard Schmidt
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:21
  44. Recently, non-intrusive speech quality assessment methods have attracted a lot of attention since they do not require the original reference signals. At the same time, neural networks have begun to be applied to ...

    Authors: Miao Liu, Jing Wang, Weiming Yi and Fang Liu
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2021 2021:20


Annual Journal Metrics

  • Citation Impact 2023
    Journal Impact Factor: 1.7
    5-year Journal Impact Factor: 1.6
    Source Normalized Impact per Paper (SNIP): 1.051
    SCImago Journal Rank (SJR): 0.414

  • Speed 2023
    Submission to first editorial decision (median days): 17
    Submission to acceptance (median days): 154

  • Usage 2023
    Downloads: 368,607
    Altmetric mentions: 70

Funding your APC

Open access funding and policy support by SpringerOpen

We offer a free open access support service to make it easier for you to discover and apply for article-processing charge (APC) funding.