
Efficiency of chosen speech descriptors in relation to emotion recognition

Abstract

This research paper presents a parametrization of emotional speech using a pool of features commonly utilized in emotion recognition, such as fundamental frequency, formants, energy, and MFCC, PLP, and LPC coefficients. The pool is additionally expanded by perceptual coefficients such as BFCC, HFCC, RPLP, and RASTA PLP, which are used in speech recognition but have not been applied in emotion detection. The main contribution of this work is a comparison of the emotion detection accuracy obtained with each feature type, based on the results provided by both k-NN and SVM classifiers with 10-fold cross-validation. The analysis was performed on two different Polish emotional speech databases: voice performances by professional actors compared with a database of spontaneous speech collected by the authors.

1 Introduction

Emotion recognition methods utilize various input types, i.e., facial expressions [1], speech, gestures, and body language [2], as well as physiological signals such as electrocardiogram (ECG), electromyography (EMG), electrodermal activity, skin temperature, galvanic resistance, blood volume pulse (BVP), and respiration [3]. Speech is the most accessible of the aforementioned signals; therefore, much research in the field of emotion recognition is focused on the human voice.

According to Plutchik’s theory [4], there are eight primary bipolar emotions: joy versus sadness, anger versus fear, trust versus disgust, and surprise versus anticipation. These emotions are biologically primitive and evolved to increase reproductive fitness. Primary emotions can be expressed at different intensities and can be mixed with one another to form different emotional states. As a result, the perception of natural emotions is a complex and subjective process, and recognizing several different emotional states in a given situation is very common.

Initially, research on emotion recognition was mostly conducted on acted-out speech, which carries undisturbed and clear expressions of single emotions [5]. In 2009, at the Affective Computing and Intelligent Interaction (ACII) conference, a session was held dedicated to emotion recognition from ambiguous samples (containing a mixture of emotions). This started a new wave in the field of emotion recognition, in which researchers abandoned acted-out speech in favor of spontaneous speech [6]. In line with that, this article describes a database of Polish emotional speech extracted from natural discussions in TV programs. The database consists of over 784 samples divided into seven sets representing primary emotional states: although Plutchik’s wheel presents eight basic emotions, the psychologists involved in labeling the data did not label any of the audio signals as “trust.” Hence, in this work, only seven basic emotions, namely joy, fear, surprise, sadness, disgust, anger, and anticipation, are used. Moreover, for comparison, emotions performed by professional actors were also analyzed.

Emotion recognition from speech is a pattern recognition problem. Therefore, standard pattern recognition methodologies, which involve feature extraction and classification, are used to solve the task [7]. The number of speech descriptors taken into consideration is still increasing. Mostly, acoustic and prosodic features from the Interspeech 2009 Challenge set [8] are utilized; thus, fundamental frequency, formants, energy, Mel Frequency Cepstral Coefficients (MFCC), and Linear Prediction Coefficients (LPC) are widely explored. Nevertheless, the search for new speech features is ongoing.

This research is conducted using a pool of features commonly utilized in emotion recognition, such as fundamental frequency f0, formants, energy, and MFCC, Perceptual Linear Prediction (PLP), and LPC coefficients. The pool is additionally expanded by perceptual coefficients such as Bark Frequency Cepstral Coefficients (BFCC), Human Factor Cepstral Coefficients (HFCC), Revised Perceptual Linear Prediction (RPLP), and RASTA Perceptual Linear Prediction (RASTA PLP). The main contribution of this work is to test the abovementioned perceptual features, which are applied in speech recognition research but omitted in emotion recognition. All feature sets were tested separately in order to demonstrate their impact on emotion recognition. The verification of feature set efficiency was carried out using both k-NN and a Multi-Class Support Vector Machine (SVM) [9] with a radial kernel, applying 10-fold cross-validation on two independent speech corpora.

The outline of this paper is as follows. Section 2 describes the emotional speech databases: the structure of the corpora, the methods for selecting the recording sources, and the process of emotional speech labeling. Section 3 introduces the examined speech descriptors. Section 4 presents the obtained results. Finally, Sections 5 and 6 discuss and conclude the paper.

2 Emotional speech corpora

Emotional speech samples can be divided into three categories according to their source: spontaneous, invoked, and acted (simulated) emotions. The first type can be obtained by recording speakers in natural situations or by using TV programs such as talk shows, reality shows, or various types of live coverage. This type of material is not always of satisfactory quality (background noise, artifacts, overlapping voices, etc.), which may obscure the exact nature of the recorded emotions. Moreover, collections of spontaneous speech must be evaluated by human decision makers to determine the gathered emotional states.

Another method of sample acquisition is provoking an emotional reaction using drugs or staged situations. Appropriate states are induced using visual material (videos, images), stories, or computer games. This type of recording is preferred by psychologists, although the method does not always provide the desired effect, as reactions to the same stimuli may differ between subjects. Similar to spontaneous speech recordings, triggered emotional samples should be subjected to a process of identification by independent listeners.

The third source of emotional speech is acted-out samples. Speakers can be both actors and unqualified volunteers. This type of material usually comprises high-quality recordings with clear, undistorted expressions of emotion. Furthermore, the ease of acquiring such recordings opens the possibility of obtaining several utterances representing different emotional states from a single speaker. However, the acoustic characteristics of such utterances may be exaggerated, while more subtle features are completely ignored.

2.1 Polish spontaneous speech database

The corpus is based on Robert Plutchik’s theory: it consists not only of primary emotions but also of complex emotional states, and thus covers a much wider range than commonly used databases [10]. The first step was to gather audio samples carrying the basic states from Plutchik’s wheel of emotions: joy, sadness, anger, fear, disgust, surprise, and anticipation. All samples were assessed by a large group of human evaluators (experts and volunteers) and labeled with the abovementioned emotion classes, as summarized in Table 1.

Table 1 Structure of the corpus

Statistical analysis is an integral part of creating an emotional speech corpus, which should meet certain criteria. One of them is preserving the distribution of the parameters (characteristics) relevant to the target application, which affects the reliability of the corpus. The set presented in this study is a collection of emotional expressions in Polish; spatial extent, time, place, and personal characteristics of the speakers are not restricted.

The selection of representative sample recordings is one of the key elements affecting research credibility. A sample is assumed to be representative when all the values which could affect the test results are present. Because the process of emotion expression is subjective, depending primarily on gender and age, these variables are taken into account in the process of corpus creation. In order to retain approximately equal proportions of these variables, this information is one of the guidelines used when selecting speech sources. This assumption is largely limited by the lack of personal data of the speakers in recordings obtained from radio broadcasts.

The most important property of the samples is the authenticity of the presented emotional states, which narrows the search area. The authors focus mainly on material from live shows and programs such as reality shows. The reactions and feelings presented by the participants of such programs seem spontaneous, provoked by events and discussions. For example, shows presenting political and social problems (e.g., Państwo w Państwie by Polsat TV) contain a large number of anger displays. The assumption of the authenticity of emotions could be false and is associated with the subjective evaluation performed by the authors and the volunteers involved in the labeling process. It is also important to mention that the collected recordings often contain background noise, which might have affected the assessment.

The emotional state of the speaker can be identified based on short utterances such as Yes or No [11]. Thus, short sentences, or even single words, are suitable for emotional analysis. Occasionally, additional sounds such as screaming, squealing, laughing, or crying carry information about the speaker’s emotional state. Therefore, in addition to spoken words, such sounds, which occur in everyday communication, are featured in the created corpus.

In addition, a neutral speech model (without emotional coloring) is created for the purposes of emotional research. It is composed of statements from [12] and supplemented with statements of journalists commenting on various events. Such utterances are usually neutral and do not carry any emotional load. This model consists of 235 statements and is not subjected to labeling by volunteers.

The labeling process is divided into two parts. First, the recordings are divided into seven groups (basic emotions). This process is conducted by the authors and students of the faculty of psychology from the University of Lodz. The division is performed with the use of video material which allows access not only to voice and semantics but also to the visual display of emotions, such as gestures or facial expressions. In the second part of the process, the volunteers label the samples based on audio input only. This emphasizes how subjective the perception of emotions really is.

A listening test of the pre-qualified samples is performed to check whether listeners are able to identify the emotional content of the recordings. The volunteer group consists of 15 people, both male and female, aged 21 to 58 years, with no hearing disabilities. The task is to assess the recordings and classify them into the groups of seven basic emotions. Each listener is presented with a random set of samples containing at least half of the pre-qualified recordings of each basic emotion. The evaluators listen to the audio samples one by one, and each assessment is recorded in the database. Every sample can be replayed a number of times before the final decision, but after classification it is not possible to return to the recording.

The average recognition rate amounted to 82.66%, ranging from 63 to 93% across listeners. However, it should be noted that the pre-qualified samples rated by the authors and the psychology students form the basis of the classification, and that assessment is also subjective. Therefore, the samples whose labels repeatedly mismatched the pre-qualification are used to form ambiguously defined states. Samples assessed identically by at least 10 people are classified as pure prototype states. The database can be made available upon request, for research purposes only.

2.2 Polish acted speech database

The Polish acted emotional speech database is made available by the Medical Electronics Division, Technical University of Lodz. It consists of 240 sentences uttered by eight speakers (four males and four females). Recordings for every speaker were made during a single session. Each speaker utters five different sentences with six types of emotional load: joy, boredom, fear, anger, sadness, and neutral (no emotion). The recordings were made in the auditorium of the Polish National Film, Television and Theater School in Lodz.

The methodology of inducing a particular emotional attitude follows the recommendations of [13]: the uttering of each database sentence (the sentences have no particular emotional meaning) is preceded by uttering a sentence with a clear emotional connotation, relevant for the current recording.

To assess the quality of the database material, the recordings are evaluated by 50 subjects through a procedure of classifying 60 randomly generated samples (10 samples per emotion). Listeners are asked to classify each utterance into emotional categories. The average rate of correct recognition for this evaluation experiment is 72% (ranging from 60 to 84% for different subjects) [12].

3 Methods

3.1 Prosodic features

F0 is the vibration frequency of the vocal folds. It is responsible for the pitch of the human voice and for accent, and it plays an important role in intonation, which has a significant impact on the character of speech. F0 changes during articulation, and the rate of those changes depends on the speaker’s intended intonation [14]. There are many methods to determine the fundamental frequency. In this paper, f0 is extracted using the autocorrelation method. The analysis window is set to 20 ms with 50% overlap. It is difficult to objectively assess the behavior of f0 from its contour alone; therefore, statistical parameters related to f0 are extracted.
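For illustration, a minimal Python sketch of such an autocorrelation-based f0 estimator, together with the statistical descriptors computed from the resulting contour, is given below. The 60-400 Hz search range, the Hamming window, and the simple voicing threshold are assumptions made for this sketch and are not taken from the paper.

```python
import numpy as np

def f0_autocorrelation(signal, sr, frame_ms=20, fmin=60.0, fmax=400.0):
    """Per-frame f0 by autocorrelation peak picking (20 ms frames, 50% overlap)."""
    frame_len = int(sr * frame_ms / 1000)
    hop = frame_len // 2                          # 50% overlap
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)
    track = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len] * np.hamming(frame_len)
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag = lag_min + np.argmax(ac[lag_min:lag_max])
        if ac[lag] > 0.3 * ac[0]:                 # crude voicing decision (assumed threshold)
            track.append(sr / lag)
    track = np.array(track)
    if track.size == 0:                           # no voiced frames found
        return None
    # statistical descriptors of the f0 contour
    return {"mean": track.mean(), "median": np.median(track),
            "std": track.std(), "min": track.min(), "max": track.max()}
```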

Formant frequencies are the frequencies at which local maxima of the speech signal spectrum envelope occur. They are properties of the vocal tract; based on them, it is possible to determine who is speaking, as well as what and how he or she is speaking [15]. In practice, three to five formants are used in applications. In this paper, three formant frequencies are estimated, and on their basis, parameters such as the mean, median, standard deviation, maximum, and minimum are determined. A total of 15 features are extracted.
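Although the paper does not detail its formant estimator, a common approach is root-finding on an LPC polynomial; the sketch below illustrates this for a single voiced frame. The LPC order heuristic (2 + sr/1000) and the bandwidth threshold are standard defaults assumed here, not values taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def formants_lpc(frame, sr):
    """First three formant frequencies of one voiced frame via LPC root finding."""
    order = int(2 + sr / 1000)                                   # common rule of thumb
    frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])   # pre-emphasis
    frame = frame * np.hamming(len(frame))
    # autocorrelation-method LPC: solve the Yule-Walker equations
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((ac[:order], ac[:order]), ac[1:order + 1])
    roots = np.roots(np.concatenate(([1.0], -a)))    # poles of the LPC model
    roots = roots[np.imag(roots) > 0]                # one root per conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)
    bandwidths = -(sr / np.pi) * np.log(np.abs(roots))
    # keep sharp resonances in a plausible range and report the lowest three
    formants = sorted(f for f, b in zip(freqs, bandwidths) if f > 90 and b < 400)
    return formants[:3]
```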

Speech signal energy, which refers to the volume or intensity of speech, also provides information that can be used to distinguish emotions (e.g., joy and anger increase energy levels in comparison to other emotional states).
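For illustration, a frame-wise log-energy track might be computed as follows; the 20 ms frame length with 50% overlap is assumed here to match the prosodic analysis above.

```python
import numpy as np

def short_time_energy(signal, sr, frame_ms=20):
    """Frame-wise log energy of a speech signal (in dB)."""
    frame_len = int(sr * frame_ms / 1000)
    hop = frame_len // 2                              # 50% overlap
    energy = [np.sum(signal[i:i + frame_len].astype(float) ** 2)
              for i in range(0, len(signal) - frame_len, hop)]
    return 10.0 * np.log10(np.array(energy) + 1e-12)  # floor avoids log(0)
```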

3.2 Spectral coefficients

The perceptual approach is based on frequency warping that corresponds to the subjective response of the human auditory system. For this purpose, perceptual scales such as the Mel or Bark scale are used. In this paper, Mel Frequency Cepstral Coefficients (MFCC) [16], Human Factor Cepstral Coefficients (HFCC) [17], Bark Frequency Cepstral Coefficients (BFCC) [18], Perceptual Linear Prediction (PLP) [19], RASTA Perceptual Linear Prediction (RASTA PLP) [20], and Revised Perceptual Linear Prediction (RPLP) [18] coefficients are taken into consideration. The entire scheme for perceptual feature extraction is shown in Fig. 1.

Fig. 1 Feature extraction process for different methods
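All of these coefficient families follow the general pipeline of Fig. 1: a short-time power spectrum is passed through a filterbank spaced on a perceptual scale, compressed, and decorrelated. The sketch below illustrates this generic scheme with a Bark-warped triangular filterbank followed by a DCT; the particular filter shapes, bandwidths, and linear-prediction stages that distinguish MFCC, HFCC, PLP, RASTA PLP, and RPLP are omitted, and the Traunmüller Bark formula is an assumption.

```python
import numpy as np
from scipy.fft import dct

def hz_to_bark(f):
    """Traunmüller's approximation of the Bark scale (assumed; the paper
    does not state which Bark formula it uses)."""
    return 26.81 * f / (1960.0 + f) - 0.53

def filterbank_cepstrum(frame, sr, n_filters=24, n_ceps=13, nfft=512):
    """Generic perceptual cepstrum: power spectrum -> warped triangular
    filterbank -> log -> DCT. Swapping hz_to_bark for a Mel warp yields
    MFCC-style coefficients."""
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), nfft)) ** 2
    freqs = np.fft.rfftfreq(nfft, 1.0 / sr)
    # filter edges equally spaced on the warped scale
    edges = np.linspace(hz_to_bark(0.0), hz_to_bark(sr / 2.0), n_filters + 2)
    warped = hz_to_bark(freqs)
    fb = np.zeros((n_filters, freqs.size))
    for i in range(n_filters):
        lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
        rising = (warped - lo) / (mid - lo)
        falling = (hi - warped) / (hi - mid)
        fb[i] = np.clip(np.minimum(rising, falling), 0.0, None)
    log_energies = np.log(fb @ spec + 1e-12)
    return dct(log_energies, type=2, norm="ortho")[:n_ceps]
```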

4 Efficiency of features

The verification of the efficiency of the feature subsets is carried out using k-NN and SVM classifiers applying 10-fold cross-validation on two independent speech corpora. This method allows descriptor efficiency to be evaluated reliably. It is based on a random division of the whole set into 10 subsets of equal size. A single subset is then used as the test set, and the rest acts as the training set. This process is repeated 10 times, so that every subset is used as a test set, and the final result is obtained by averaging the results of all iterations. In this way, dominant features are distinguished. Table 2 presents the efficiency of the commonly used feature subsets. In the course of the research, the value of k was selected to achieve the highest classification results.

Table 2 Average recognition results [%] of commonly used feature subsets: acted speech (AS) and natural speech (NS)
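The evaluation protocol described above can be sketched with scikit-learn as follows. Note that the paper uses the multi-class SVM of [9], for which the RBF-kernel SVC (one-vs-one) used here is only a stand-in; the k grid and the feature standardization step are likewise assumptions.

```python
from sklearn.model_selection import StratifiedKFold, GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_feature_subset(X, y):
    """Mean 10-fold CV accuracy of one feature subset for k-NN and an RBF SVM."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

    # k-NN with k tuned on the training folds only
    knn = GridSearchCV(KNeighborsClassifier(),
                       {"n_neighbors": [1, 3, 5, 7, 9, 11]}, cv=5)
    knn_acc = cross_val_score(make_pipeline(StandardScaler(), knn), X, y, cv=cv).mean()

    # SVM with a radial (RBF) kernel
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    svm_acc = cross_val_score(svm, X, y, cv=cv).mean()
    return knn_acc, svm_acc
```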

4.1 Perceptual coefficients

The next step of the analysis includes a detailed comparison of perceptual coefficient efficiency. As in the previous step, the classification is carried out using the k-NN and SVM classifiers applying 10-fold cross-validation. The number of perceptual coefficients giving the maximum results depends on the type of examined features; it was selected in order to achieve the highest classification results. The value of k in the case of the k-NN algorithm is chosen experimentally to give the highest classification results for a given group. For each signal frame, the respective coefficients are obtained, and based on those, statistical features are extracted.
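For illustration, collapsing the per-frame coefficients into an utterance-level feature vector of statistics might look as follows; the exact set of statistics applied to the perceptual coefficients is assumed here to mirror the one listed for the formants.

```python
import numpy as np

def frame_statistics(coeff_matrix):
    """Turn an (n_frames x n_coeffs) matrix of per-frame coefficients into
    one utterance-level vector of statistics (mean, median, std, min, max)."""
    coeff_matrix = np.asarray(coeff_matrix)
    funcs = (np.mean, np.median, np.std, np.min, np.max)
    return np.concatenate([f(coeff_matrix, axis=0) for f in funcs])
```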

Subsequently, the sets of the aforementioned coefficients are expanded by their dynamic parameters. Classification efficiency with various combinations of these parameters is shown in Table 3. In both cases, none of the dynamic parameters provides an increase in the recognition rate.

Table 3 Classification efficiency using k-NN and SVM [%] with various combinations of dynamic parameters for acted speech (AS) and natural speech (NS)
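The dynamic parameters referred to above are the delta (and delta-delta) coefficients computed across the frame sequence. A standard regression-based formulation is sketched below; the window width is an assumption.

```python
import numpy as np

def delta(coeffs, width=2):
    """Regression-based delta coefficients of an (n_frames x n_coeffs) matrix.
    Delta-delta ("acceleration") features are obtained as delta(delta(coeffs))."""
    coeffs = np.asarray(coeffs)
    padded = np.pad(coeffs, ((width, width), (0, 0)), mode="edge")
    n = coeffs.shape[0]
    num = sum(k * (padded[width + k:n + width + k] - padded[width - k:n + width - k])
              for k in range(1, width + 1))
    return num / (2 * sum(k * k for k in range(1, width + 1)))
```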

It can be noticed that the perceptual coefficients provide much higher recognition results than the previously tested features. The best results in both corpora are obtained using the hybrid coefficients. In the case of k-NN, the highest recognition is achieved for BFCC: 56.5% for acted speech and 74.5% for natural speech. Generally, the accuracy performance for SVM is much lower; nevertheless, the k-NN results prove the validity of applying the hybrid coefficients. For SVM, the best results are obtained using RPLP coefficients (43.88%) for acted speech and BFCC coefficients (59.4%) for natural speech.

4.2 Selection

Selection is conducted on the specific subsets to improve the efficiency of classification. For the purpose of this research, two different feature selection techniques were chosen. The first is the Fisher-Markov Selector (FMS) [21], which is independent of the classifier. The second is a classifier-dependent wrapper method, Sequential Forward Selection (SFS) [22]. The results of the experiments for both corpora are presented in Table 8.
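The sketch below illustrates both flavors: a wrapper-style SFS built around k-NN (using scikit-learn's SequentialFeatureSelector) and a simple per-feature Fisher score used as a classifier-independent filter. The latter is only a simplified stand-in for the full Fisher-Markov Selector of [21], and the subset size and k value are assumptions.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

def sfs_select(X, y, n_features=20):
    """Classifier-dependent forward selection wrapped around k-NN."""
    sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=5),
                                    n_features_to_select=n_features,
                                    direction="forward", cv=5)
    return sfs.fit(X, y).get_support(indices=True)

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class over within-class variance.
    NOTE: a simplified filter, not the Fisher-Markov Selector itself."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    overall_mean = X.mean(axis=0)
    between = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall_mean) ** 2
                  for c in np.unique(y))
    within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                 for c in np.unique(y))
    return between / (within + 1e-12)
```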

4.3 Comparison of emotion recognition quality utilizing feature selection applied to feature subsets

Tables 4 and 5 present the results of emotion recognition after applying the SFS and FMS methods to the feature subsets. Additionally, both methods are compared with a feature extraction method, Sparse Principal Component Analysis (S-PCA) [23]. The results are presented for both corpora, divided into subsets of attributes.

Table 4 Accuracy performance of emotion recognition [%] without selection (–) in comparison with two different selection methods (FMS and SFS) and an extraction method (S-PCA), using the k-NN algorithm as a classifier
Table 5 Accuracy performance of emotion recognition [%] without selection (–) in comparison with two different selection methods (FMS and SFS) and an extraction method (S-PCA), using the SVM algorithm as a classifier
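The S-PCA extraction step can be realized, for instance, with scikit-learn's SparsePCA, fitted on the training folds only; the component count and the sparsity penalty used here are assumptions.

```python
from sklearn.decomposition import SparsePCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def spca_transform(X_train, X_test, n_components=30, alpha=1.0):
    """Fit sparse PCA on the training portion and project both partitions."""
    spca = make_pipeline(StandardScaler(),
                         SparsePCA(n_components=n_components, alpha=alpha,
                                   random_state=0))
    return spca.fit_transform(X_train), spca.transform(X_test)
```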

Analyzing the results obtained with the k-NN algorithm, in most cases one can see an improvement in recognition rate after using the SFS or FMS selection methods. For SFS, the exceptions are LPC in the case of the acted speech database, and PLP and RASTA PLP in the case of the spontaneous speech database; the recognition rates achieved for these attributes after applying SFS remained unchanged. An improvement in recognition can also be observed after applying FMS, with the exception of f1-f3 and LPC in the case of acted speech and BFCC and PLP in the case of the spontaneous speech database; the recognition rates achieved for these attributes after applying FMS remained unchanged.

After applying feature extraction (S-PCA) to both corpora, in most cases a slight decrease in recognition rate can be observed in comparison to the results obtained without any selection. Moreover, the results never reach values higher than those obtained using one of the selection methods.

In the case of SVM, the recognition rate is improved by using SFS and FMS. The exceptions for SFS are f0, f1-f3, energy, and PLP in the case of the acted speech database and f0 and energy in the case of the spontaneous speech database; the recognition for these attributes remained unchanged after selection. For FMS, the exceptions are f0, energy, PLP, and RASTA PLP in the case of the acted speech database and f0, BFCC, MFCC, HFCC, and RPLP in the case of the spontaneous speech database; the recognition for these attributes remained unchanged after selection. Using FMS resulted in a slight decrease in the recognition rate for f1-f3 in the acted speech database.

In contrast to k-NN, after applying feature extraction to both corpora, in most cases a large increase in recognition rate can be observed in comparison to the results obtained with and without both selection methods. Significantly higher results can be seen for the spontaneous speech database.

For the spontaneous speech database, the best results are achieved for the subset containing BFCC coefficients (81.7% with S-PCA). The lowest results are obtained in the case of the formants and PLP coefficients: 17.29% for SVM with SFS and 22.36% for SVM with SFS, respectively.

In the case of the acted speech corpus, the highest results are achieved for BFCC: 64.4% (k-NN with SFS and FMS), and the lowest results are achieved using the fundamental frequency and PLP coefficients: 19.66% (SVM with SFS/FMS) and 22.47% (SVM with SFS/FMS), respectively.

4.4 Comparison of emotion recognition quality utilizing feature selection applied to the combined feature set

Tables 6 and 7 present the results of emotion recognition conducted on a feature set comprising all subsets presented in Tables 4 and 5, for acted and natural speech, respectively. All selection and extraction methods were applied exactly as in the previous section. Additionally, feature selection and extraction are applied together on the combined feature set (SFS + S-PCA, FMS + S-PCA), as sketched below.

Table 6 Accuracy performance of emotion recognition [%] for acted speech
Table 7 Accuracy performance of emotion recognition [%] for natural speech
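A possible realization of the selection-then-extraction configuration (e.g., the FMS + S-PCA variant) is sketched below, with the Fisher-score filter from Section 4.2 standing in for FMS; the subset size, component count, and SVM settings are assumptions rather than the paper's actual configuration.

```python
from sklearn.decomposition import SparsePCA
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def selection_then_extraction_cv(X, y, k_features=90, n_components=30):
    """10-fold CV of a filter-selection -> sparse-PCA -> RBF-SVM pipeline.
    `fisher_scores` is the simplified Fisher filter sketched in Section 4.2."""
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(score_func=fisher_scores, k=k_features),
                         SparsePCA(n_components=n_components, random_state=0),
                         SVC(kernel="rbf"))
    return cross_val_score(pipe, X, y, cv=10).mean()
```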

For the acted speech database, the best results were obtained for k-NN with FMS (65.8%). The lowest accuracy, 16.87%, was achieved for SVM on the whole feature set and after applying SFS.

The highest accuracy performance for natural speech was achieved with SVM. After applying both selection and extraction (FMS + S-PCA), it reached 83.95%. Similar to the previous database, the lowest results were obtained for SVM with SFS and for the whole feature set.

Applying selection reduced the feature set size from the initial 448 attributes to 57 (SFS) and 99 (FMS) for the acted speech corpus. For the spontaneous speech database, the values decreased from the initial 473 to 88 (SFS) and 90 (FMS). The distribution of each feature subset after applying selection to the whole feature set is presented in Fig. 2.

Fig. 2 Distribution of feature groups after applying selection on the combined feature set (top: acted speech, bottom: natural speech; left: SFS, right: FMS)

5 Discussion

Expression of emotion generally depends on the speaker, the culture, and environment [24]. In order to reduce those factors, both emotional speech databases used in this research contain utterances in Polish, performed by native Polish speakers.

One can notice significantly lower recognition results for the acted speech database. This is a result of the different contents of the two databases. In the case of the acted speech corpus, the relatively low number of speakers in comparison to the sample count could have affected the performance of the classifiers. Moreover, the contents of the utterances were the same across different emotional states. This is an advantage if one wants to ensure that human judgment of the perceived emotion is based solely on the emotional content [25]; however, in the case of an automated recognition system, the line between emotion and speech recognition might become blurred, especially in this case, where the tested features are commonly used in speech recognition tasks.

The natural speech corpus contents were selected in order to ensure a proper number and variety of samples, which considerably increased the recognition rates. The high number of speakers, as well as the varied contents of the utterances, guaranteed that the extracted features carried information about the emotion and not just about the speech or the speaker. Moreover, the higher number of samples in the natural speech database was reflected in the classification results.

For both corpora, similar discrepancies were observed in the classification performed by human evaluators: the average human recognition rate for natural and acted speech amounted to 81 and 72%, respectively.

The most important issue in speech emotion recognition is the extraction of discriminative features that efficiently characterize different emotional states. It is believed that a proper selection of features significantly affects the classification performance.

After applying SFS and FMS to the combined feature set for both databases, as presented in Fig. 2, one can notice that the majority of the selected features come from the RPLP, BFCC, and HFCC subsets, supplemented by the RASTA PLP, LPC, and MFCC groups. This correlates with the recognition results for each individual group of features, presented in Tables 4 and 5. The accuracy performance achieved in this research justifies adopting those features for the purpose of emotion recognition.

It is also worth mentioning that none of the dynamic features, presented in Table 3, were included in the feature sets obtained after applying both selection methods to the combined feature groups. They were also excluded after executing selection on the isolated feature subsets, as shown in Table 8. The addition of dynamic features, even to individual feature subsets, did not improve the classification results; in fact, in most cases, it reduced the accuracy. This behavior might be caused by the large increase in the dimensionality of the feature space, in which the dynamic features are either non-descriptive or redundant and act more as noise than as carriers of emotional information.

Table 8 Features after subset selection SFS and FMS for both corpora: acted AS and natural NS speech

Selecting an appropriate feature reduction method is a crucial step in the recognition process; however, feature selection and classification are highly dependent on the data and feature types [26]. In Tables 6 and 7, one can observe that applying SFS had little or no effect on the quality of classification. Of the two selection methods presented in this paper, the classifier-independent FMS proved superior to SFS; the highest classification rates for k-NN were achieved with FMS. Additionally, the S-PCA method was used in order to compare the results of feature extraction and selection. For SVM, applying S-PCA improved the results far more than the selection methods did. However, the best results for SVM were achieved after applying S-PCA to sets of already selected features, as shown in Tables 6 and 7. Using selection first, with extraction as a second step, helped to reduce the noise from non-discriminative features. Again, FMS gave better results than SFS when combined with S-PCA.

6 Conclusions

For the purpose of this examination, a Polish spontaneous emotion database was created. It consists of over 700 samples divided into seven sets representing primary emotional states. Moreover, for comparison, we analyzed emotions performed in Polish by professional actors.

The main objective of this work was to test the efficiency of perceptual features used in speech recognition (BFCC, HFCC, RPLP, and RASTA PLP) for emotion recognition. As this research has shown, these features proved to be highly discriminative, which justifies their application in emotion recognition.

References

  1. R. El Kaliouby, P. Robinson, in Systems, Man and Cybernetics, 2004 IEEE International Conference on, 1. Mind reading machines: Automated inference of cognitive mental states from video (IEEE, 2004), pp. 682–688.

  2. P. R. De Silva, A. P. Madurapperuma, A. Marasinghe, M. Osano, in Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, 1. A multi-agent based interactive system towards child’s emotion performances quantified through affective body gestures (IEEE, 2006), pp. 1236–1239.

  3. D. Kamińska, A. Pelikant, Recognition of human emotion from a speech signal based on Plutchik’s model. Int. J. Electron. Telecommun. 58(2), 165–171 (2012).


  4. R. Plutchik, Emotion: A Psychoevolutionary Synthesis (Harper and Row, New York, 1980).

  5. F. Burkhardt, A. Paeschke, M. Rolfes, W. F. Sendlmeier, B. Weiss, in Interspeech, 5. A database of German emotional speech, (2005), pp. 1517–1520.

  6. D. Kamińska, T. Sapiński, A. Pelikant, in Signal Processing Symposium (SPS), 2013. Recognition of emotional states in natural speech (IEEE, 2013), pp. 1–4.

  7. K. Ślot, Rozpoznawanie Biometryczne: Nowe Metody Ilościowej Reprezentacji Obiektów (Wydawnictwa Komunikacji i Łączności, 2010).

  8. B. W. Schuller, S. Steidl, A. Batliner, in Interspeech, 2009. The INTERSPEECH 2009 emotion challenge, (2009), pp. 312–315.

  9. K. Crammer, Y. Singer, On the algorithmic implementation of multi-class SVMs. JMLR 2, 265–292 (2001).


  10. D. Ververidis, C. Kotropoulos, in Proc. Panhellenic Conference on Informatics (PCI). A review of emotional speech databases (Thessaloniki, Greece, 2003), pp. 560–574.

  11. G. Klasmeyer, Emotions in Speech. Institut fur Kommunikationswissenschaft, Technical University of Berlin (1995).

  12. J. Cichosz, Database of Polish emotional speech. http://www.eletel.p.lodz.pl/med/eng/. Accessed Dec 2016.

  13. S. Mozziconacci, D. Hermes, in Proceedings of ICPhS99. Role of intonation patterns in conveying emotion in speech (ICPhS, 1999), pp. 2001–2004.

  14. C. Busso, S. S. Narayanan, Analysis of emotionally salient aspects of fundamental frequency for emotion detection. IEEE Trans. Audio Speech Lang. Process. 17(9), 582–596 (2009).


  15. A. Obrębowski, Narząd Głosu i Jego Znaczenie w Komunikacji Społecznej (Uniwersytet Medyczny im. Karola Marcinkowskiego w Poznaniu, 2008).

  16. T. Zieliński, Cyfrowe przetwarzanie sygnałów (Wydawnictwa Komunikacji i Łączności, 2013).

  17. M. D. Skowronski, J. G. Harris, in Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference on, 1. Increased MFCC filter bandwidth for noise-robust phoneme recognition (IEEE, 2002), pp. I–801.

  18. P. Kumar, A. Biswas, A. N. Mishra, M. Chandra, Spoken language identification using hybrid feature extraction methods. J. Telecommun. 1(2), 11–15 (2010).


  19. H. Hermansky, Perceptual linear predictive (PLP) analysis of speech. J. Acoust. Soc. Am. 87(4), 1738–1752 (1990).


  20. H. Hermansky, N. Morgan, RASTA processing of speech. IEEE Trans. Speech Audio Process. 2(4), 578–589 (1994).


  21. Q. Cheng, H. Zhou, J. Cheng, The Fisher-Markov selector: fast selecting maximally separable feature subset for multiclass classification with applications to high-dimensional data. IEEE Trans. Pattern Anal. Mach. Intell. 33(6), 1217–1233 (2011).


  22. J. Kittler, Feature set search algorithms, in Pattern Recognition and Signal Processing (1978), pp. 41–60.


  23. H. Zou, T. Hastie, R. Tibshirani, Sparse principal component analysis. J. Comput. Graph. Stat. 15(2), 265–286 (2006).


  24. A. Abelin, Anger or fear? Cross-cultural multimodal interpretations of emotional expressions. Emot. Human Voice 1, 65–75 (2008).


  25. M. Ayadi, M. Kamel, F. Karray, Survey on speech emotion recognition: features, classification schemes, and databases. Pattern Recogn. 44, 572–587 (2011).


  26. A. Janecek, W. Gansterer, M. Demel, G. Ecker, On the relationship between feature selection and classification accuracy. JMLR 4, 90–105 (2008).



Acknowledgments

This work has been partially supported by the Estonian Research Grant (PUT638), Estonia-Poland (Est-Pol) Research Collaboration Project (MAJoRA: Multimodal Anger and Joy Recognition by Audiovisual Information), and the Estonian Centre of Excellence in IT (EXCITE) funded by the European Regional Development Fund.

Authors’ contributions

DK conceived of the study; participated in the design of the work, data collection, data analysis, interpretation, and coordination; and drafted the manuscript. TS participated in the data collection, data analysis, and interpretation and helped to draft the manuscript. GA participated in the design of the work and critical revision of the article. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information


Corresponding author

Correspondence to Dorota Kamińska.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Kamińska, D., Sapiński, T. & Anbarjafari, G. Efficiency of chosen speech descriptors in relation to emotion recognition. J AUDIO SPEECH MUSIC PROC. 2017, 3 (2017). https://doi.org/10.1186/s13636-017-0100-x

