Articles

  1. A multimicrophone speech enhancement algorithm for binaural hearing aids that preserves interaural time delays was proposed recently. The algorithm is based on multichannel Wiener filtering and relies on a voi...

    Authors: Jasmina Catic, Torsten Dau, Jörg M Buchholz and Fredrik Gran
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:840294
  2. A method is described for quantifying the quality of wideband speech codecs. Two parameters are derived from signal-based speech quality model estimations: (i) a wideband equipment impairment factor

    Authors: Sebastian Möller, Nicolas Côté, Valérie Gautier-Turbin, Nobuhiko Kitawaki and Akira Takahashi
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:782731
  3. In multiway loudspeaker systems, digital signal processing techniques have been used to correct the frequency response, the propagation time, and the lobing errors. These solutions are mainly based on correct...

    Authors: Hmaied Shaiek and Jean-Marc Boucher
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:928439
  4. Humans represent sounds to others and receive information about sounds from others using onomatopoeia. Such representation is useful for obtaining and reporting the acoustic features and impressions of actual ...

    Authors: Masayuki Takada, Nozomu Fujisawa, Fumino Obata and Shin-ichiro Iwamiya
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:674248
  5. We give a brief discussion on the amplitude and frequency variation rates of the sinusoid representation of signals. In particular, we derive three inequalities that show that these rates are upper bounded by ...

    Authors: Xue Wen and Mark Sandler
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:941732
  6. This paper presents a method for estimating the amplitude of coincident partials generated by harmonic musical sources (instruments and vocals). It was developed as an alternative to the commonly used interpol...

    Authors: Jayme Garcia Arnal Barbedo and George Tzanetakis
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:523791
  7. Audio podcasting is now widely used by many online sites, such as newspapers, web portals, and journals, to deliver audio content to users through download or subscription. Within 1 to 30 ...

    Authors: MN Nguyen, Qi Tian and Ping Xue
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:572571
  8. Frequency-domain blind source separation (BSS) performs poorly in high reverberation because the independence assumption collapses in each frequency bin as the number of bins increases. To improve the separ...

    Authors: Lin Wang, Heping Ding and Fuliang Yin
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:797962
  9. Speaker identification performance is almost perfect in neutral talking environments. However, performance deteriorates significantly in shouted talking environments. This work is devoted to proposing, ...

    Authors: Ismail Shahin
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:862138
  10. Theoretical and applied environmental sounds research is gaining prominence, but progress has been hampered by the lack of a comprehensive, high-quality, accessible database of environmental sounds. An ongoing ...

    Authors: Brian Gygi and Valeriy Shafiro
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:654914
  11. This paper presents a model-based method for coding the LSF parameters of LPC speech coders on a "long-term" basis, that is, beyond the usual 20–30 ms frame duration. The objective is to provide efficient LSF ...

    Authors: Laurent Girin
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:597039
  12. Authors: Georg Stemmer, Elmar Nöth and Vijay Parsa
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:835974
  13. When a number of speakers are simultaneously active, for example in meetings or noisy public places, the sources of interest need to be separated from interfering speakers and from each other in order to be ro...

    Authors: Dorothea Kolossa, Ramon Fernandez Astudillo, Eugen Hoffmann and Reinhold Orglmeister
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:651420
  14. The aim of the study is to transpose and extend to a set of environmental sounds the notion of sound descriptors usually used for musical sounds. Four separate primary studies dealing with interior car sounds,...

    Authors: Nicolas Misdariis, Antoine Minard, Patrick Susini, Guillaume Lemaitre, Stephen McAdams and Etienne Parizet
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:362013
  15. The mood of music is among the most relevant and commercially promising, yet challenging, attributes for retrieval in large music collections. In this respect, this article first provides a short overview of methods...

    Authors: Björn Schuller, Johannes Dorfner and Gerhard Rigoll
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:735854
  16. This work explores the effect of mismatches between adults' and children's speech due to differences in various acoustic correlates on the automatic speech recognition performance under mismatched conditions. ...

    Authors: Shweta Ghai and Rohit Sinha
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:318785
  17. Human communication about entities and events is primarily linguistic in nature. While visual representations of information have been shown to be highly effective as well, relatively little is known about the commu...

    Authors: Xiaojuan Ma, Christiane Fellbaum and Perry Cook
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:404860
  18. The paper considers the task of recognizing phonemes and words from a singing input by using a phonetic hidden Markov model recognizer. The system is targeted to both monophonic singing and singing in polyphon...

    Authors: Annamaria Mesaros and Tuomas Virtanen
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:546047
  19. With ageing, human voices undergo several changes which are typically characterized by increased hoarseness and changes in articulation patterns. In this study, we have examined the effect on Automatic Speech ...

    Authors: Ravichander Vipperla, Steve Renals and Joe Frankel
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:525783
  20. We revisit an original concept of speech coding in which the signal is separated into the carrier modulated by the signal envelope. A recently developed technique, called frequency-domain linear prediction (FD...

    Authors: Petr Motlicek, Sriram Ganapathy, Hynek Hermansky and Harinath Garudadri
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:856280
  21. Spoken utterance retrieval has been studied extensively in recent decades, with the purpose of indexing large audio databases or of detecting keywords in continuous speech streams. While the indexing of closed corpor...

    Authors: Mickael Rouvier, Georges Linarès and Benjamin Lecouteux
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:326578
  22. Breathy and whispery voices are nonmodal phonations produced by an air escape through the glottis and may carry important linguistic or paralinguistic information (intentions, attitudes, and emotions), dependi...

    Authors: Carlos Toshinori Ishi, Hiroshi Ishiguro and Norihiro Hagita
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:528193
  23. The automatic recognition of children's speech is well known to be a challenge, and so is the influence of affect, which is believed to degrade the performance of a speech recogniser. In this contribution, we inve...

    Authors: Stefan Steidl, Anton Batliner, Dino Seppi and Björn Schuller
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2010:783954
  24. The fractional Fourier transform (FrFT) has been proposed to improve the time-frequency resolution in signal analysis and processing. However, selecting the FrFT order for the proper analysis of multicom...

    Authors: Hui Yin, Climent Nadeu and Volker Hohmann
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2010 2009:304579
  25. This paper proposes a query by example system for generic audio. We estimate the similarity of the example signal and the samples in the queried database by calculating the distance between the probability den...

    Authors: Marko Helén and Tuomas Virtanen
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2010:179303
  26. This paper proposes a method for transcribing drums from polyphonic music using a network of connected hidden Markov models (HMMs). The task is to detect the temporal locations of unpitched percussive sounds (...

    Authors: Jouni Paulus and Anssi Klapuri
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:497292
  27. We are developing a method of Web-based unsupervised language model adaptation for recognition of spoken documents. The proposed method chooses keywords from the preliminary recognition result and retrieves We...

    Authors: Akinori Ito, Yasutomo Kajiura, Motoyuki Suzuki and Shozo Makino
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:140575
  28. Speech recognition applications are known to require a significant amount of resources. However, embedded speech recognition allows only a few KB of memory, a few MIPS, and a small amount of training data. In or...

    Authors: Christophe Lévy, Georges Linarès and Jean-François Bonastre
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:806186
  29. This paper describes SynFace, a supportive technology that aims at enhancing audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animat...

    Authors: Giampiero Salvi, Jonas Beskow, Samer Al Moubayed and Björn Granström
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:191940
  30. We describe here the control, shape, and appearance models that are built using an original photogrammetric method to capture characteristics of speaker-specific facial articulation, anatomy, and texture. Two o...

    Authors: Gérard Bailly, Oxana Govokhina, Frédéric Elisei and Gaspard Breton
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:769494
  31. We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into p...

    Authors: James D Edge, Adrian Hilton and Philip Jackson
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:597267
  32. Computer-Assisted Language Learning (CALL) applications for improving the oral skills of low-proficient learners have to cope with non-native speech that is particularly challenging. Since unconstrained non-na...

    Authors: Joost van Doremalen, Catia Cucchiarini and Helmer Strik
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2010:973954
  33. Robust recognition of general audio events constitutes a topic of intensive research in the signal processing community. This work presents an efficient methodology for acoustic surveillance of atypical situat...

    Authors: Stavros Ntalampiras, Ilyas Potamitis and Nikos Fakotakis
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:594103
  34. Wireless-VoIP communications introduce perceptual degradations that are not present with traditional VoIP communications. This paper investigates the effects of such degradations on the performance of three st...

    Authors: Tiago H Falk and Wai-Yip Chan
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:104382
  35. This paper presents an image-based talking head system, which includes two parts: analysis and synthesis. The audiovisual analysis part creates a face model of a recorded human subject, which is composed of a ...

    Authors: Kang Liu and Joern Ostermann
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:174192
  36. Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either...

    Authors: Wesley Mattheyses, Lukas Latacz and Werner Verhelst
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:169819
  37. The paper presents an adaptive system for Voiced/Unvoiced (V/UV) speech detection in the presence of background noise. Genetic algorithms were used to select the features that offer the best V/UV detection acc...

    Authors: F Beritelli, S Casale, A Russo and S Serrano
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:965436
  38. Design and implementation strategies of spatial sound rendering are investigated in this paper for automotive scenarios. Six design methods are implemented for various rendering modes with different numbers of ...

    Authors: Mingsian R Bai and Jhih-Ren Hong
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:876297
  39. In patients suffering from head and neck cancer, speech intelligibility is often restricted. For assessment and outcome measurements, automatic speech recognition systems have previously been shown to be appro...

    Authors: Andreas Maier, Tino Haderlein, Florian Stelzle, Elmar Nöth, Emeka Nkenke, Frank Rosanowski, Anne Schützenberger and Maria Schuster
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2010:926951
  40. The problem of overlapping harmonics is particularly acute in musical sound separation and has not been addressed adequately. We propose a monaural system based on binary time-frequency masking with an emphasi...

    Authors: Yipeng Li and DeLiang Wang
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:130567
  41. Temporally localized distortions account for the highest variance in subjective evaluation of coded speech signals (Sen, 2001; Hall, 2001). The ability to discern and decompose perceptually relevant tempor...

    Authors: Wenliang Lu and D Sen
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:865723
  42. The problem of tracking multiple intermittently speaking speakers is difficult, as several distinct problems must be addressed. The number of active speakers must be estimated, these active speakers must be identi...

    Authors: Angela Quinlan, Mitsuru Kawamoto, Yosuke Matsusaka, Hideki Asoh and Futoshi Asano
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:673202
  43. Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequ...

    Authors: Hyunsin Park, Tetsuya Takiguchi and Yasuo Ariki
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:690451
  44. There are many ways of synthesizing sound on a computer. The method that we consider, called a mass-spring system, synthesizes sound by simulating the vibrations of a network of interconnected masses, springs, an...

    Authors: Don Morgan and Sanzheng Qiao
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:947823
  45. In 2003 and 2004, the ISO/IEC MPEG standardization committee added two amendments to its MPEG-4 audio coding standard. These amendments concern parametric coding techniques and encompass Spectral Band Replic...

    Authors: AC den Brinker, J Breebaart, P Ekstrand, J Engdegård, F Henn, K Kjörling, W Oomen and H Purnhagen
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:468971
  46. Performance of speech recognition systems strongly degrades in the presence of background noise, like the driving noise inside a car. In contrast to existing works, we aim to improve noise robustness focusing ...

    Authors: Björn Schuller, Martin Wöllmer, Tobias Moosmayr and Gerhard Rigoll
    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:942617
