openEAR 2016


Notes:

openEAR is the Munich Open-Source Emotion and Affect Recognition Toolkit, developed at the Technische Universität München (TUM). It provides efficient audio feature extraction algorithms implemented in C++, classifiers, and models pre-trained on well-known emotion databases. It is now maintained and supported by audEERING.
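
The typical openEAR workflow is command-line feature extraction followed by classification. As a minimal sketch (not official documentation): the toolkit's extractor is the SMILExtract binary driven by a configuration file; the config path, corpus directory, and ARFF output below are assumptions based on the standard distribution.

    # Minimal sketch: batch feature extraction via openEAR's SMILExtract binary.
    # Assumes SMILExtract is on PATH and that a stock config such as
    # config/emobase.conf is available; adjust both for your local install.
    import subprocess
    from pathlib import Path

    CONFIG = "config/emobase.conf"  # assumed stock configuration file

    def extract_features(wav_path: str, arff_out: str) -> None:
        """Run SMILExtract on one WAV file, writing features to an ARFF file."""
        subprocess.run(
            ["SMILExtract", "-C", CONFIG, "-I", wav_path, "-O", arff_out],
            check=True,
        )

    if __name__ == "__main__":
        for wav in sorted(Path("corpus").glob("*.wav")):  # hypothetical corpus dir
            extract_features(str(wav), "features.arff")

The resulting feature file can then be fed to any classifier; openEAR itself also ships pre-trained models for this step.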

  • Affect recognition
  • Affective agent
  • Emotion detection
  • Emotion detection system
  • Emotion recognition
  • Talking books

Resources:

Wikipedia:

References:

See also:

100 Best Emotion Recognition Videos | Affective Computing & Dialog Systems 2016 | Emotional Agents 2016


Multimodal Sentiment Analysis Using Deep Neural Networks
H Abburi, R Prasath, M Shrivastava… – … Conference on Mining …, 2016 – Springer
… Speech data in an audio file is generally characterized by the vocal tract, excitation, and prosody. Audio features like pitch, intensity, and loudness are extracted using the OpenEAR software, and a Support Vector Machine (SVM) classifier is built to detect the sentiment [12]. …
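
As a rough illustration of the pipeline this entry describes (acoustic features feeding an SVM), and not the authors' actual implementation: the sketch below assumes the OpenEAR features have already been exported to a NumPy matrix, and uses scikit-learn's SVC in place of whatever SVM implementation the paper used.

    # Sketch: sentiment detection with an SVM over OpenEAR-style features.
    # X holds one row of acoustic features (pitch, intensity, loudness, ...)
    # per clip; y holds sentiment labels. Both files are hypothetical exports.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X = np.load("openear_features.npy")  # hypothetical feature matrix
    y = np.load("sentiment_labels.npy")  # hypothetical labels (0 = neg, 1 = pos)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))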

WISE: Web-based Interactive Speech Emotion Classification.
SE Eskimez, M Sturge-Apple, Z Duan… – SAAIP …, 2016 – researchgate.net
… improve the system. OpenEar is an emotion classification multi-platform software package that includes libraries for feature extraction written in C++ and pre-trained models, as well as scripts to support model building. One of …

Hargreaves’ “open-earedness”: A critical discussion and new approach on the concept of musical tolerance and curiosity
C Louven – Musicae Scientiae, 2016 – journals.sagepub.com

Intensified Sentiment Analysis of Customer Product Reviews Using Acoustic and Textual Features
S Govindaraj, K Gopalakrishnan – ETRI Journal, 2016 – etrij.etri.re.kr
… In our study, AFs of customer product reviews are extracted using Munich OpenEAR, which is a tool for automatic emotion recognition and feature extraction [6]. We use the following AFs, among others: voice intensity, loudness, energy, fundamental frequency (F0), and Mel …

Generating language distance metrics by language recognition using acoustic features
L Sun, R Hu, H Yu, TJ Sluckin – Wireless Communications & …, 2016 – ieeexplore.ieee.org
… Abstract—A language recognition system is used to build a quantitative measure of language distance. The OpenEAR toolkit is used to extract more than 6,000 features per speech sample. … A. Feature Extraction: Feature extraction uses the OpenEAR toolkit [22]. …

Noise-Robust Speech Emotion Recognition Using Denoising Autoencoder
HK Ha, NK Kim, WK Seong, HK Kim – Audio Engineering Society …, 2016 – aes.org
… Actually, the DAE tries to learn how to transform the 13-dimensional features, and the SVM was trained using LibSVM [12], which was a part of openEAR [13]. … [13] F. Eyben, M. Wöllmer, and B. Schuller, openEAR – introducing the Munich open-source … [3] TL Nwe, SW …

Discussion and Outlook
F Eyben – Real-time Speech and Music Classification by Large …, 2016 – Springer
… Implementations of these feature sets along with a generic framework for incremental, real-time acoustic feature extraction were published as an open-source toolkit (first released as openEAR (Eyben et al. 2009a) including affect recognition models (Eyben et al. …

Spontaneous speech emotion recognition via multiple kernel learning
C Zha, P Yang, X Zhang, L Zhao – Measuring Technology and …, 2016 – ieeexplore.ieee.org
… By using the openEar toolbox [15], we extract 384-dimensional feature vectors for each utterance. … “OpenEAR—introducing the Munich open-source emotion and affect recognition toolkit.” Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. …

Wavelet-Based Time-Frequency Representations for Automatic Recognition of Emotions from Speech
JC Vasquez-Correa, T Arias-Vergara… – … ; 12. ITG Symposium; …, 2016 – ieeexplore.ieee.org
… Note also that in most of the cases the wavelet-based TF representations provide higher UARs than OpenEAR. Table 2: Detection of high vs. low arousal emotions. V: voiced, U: unvoiced. … negative valence emotions. In general, the highest UARs are obtained with OpenEAR. …

SVM Classifier for Emotional Speech Recognition in Software Environment SEBAS
M Milošević, Ž Nedeljković, Ž Đurović – academia.edu
… reality etc. In recent years, a few software toolkits have been developed for the field of emotional speech recognition, such as openSMILE [1], openEAR [2], the Hidden Markov Model Toolkit (HTK) [3], and Voicebox [4]. These …

Augmenting Supervised Emotion Recognition with Rule-Based Decision Model
A Patwardhan, G Knapp – arXiv preprint arXiv:1607.02660, 2016 – arxiv.org
… We used the openEar toolkit [28] for capturing audio data. The data from the various modalities was combined at the decision level. … For the audio modality the openEar toolkit was used to extract the features, and the pre-built SVM-based classifiers were used for emotion …

Fusing audio, visual and textual clues for sentiment analysis from multimodal content
S Poria, E Cambria, N Howard, GB Huang, A Hussain – Neurocomputing, 2016 – Elsevier
… Audio features were also extracted using a 30 Hz frame-rate and a sliding window of 100 ms. To compute the features, we used the open source software OpenEAR [57]. Specifically … voice. Using openEAR we extracted 6373 features. …
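
To make the framing above concrete: a 30 Hz frame rate means a new analysis window starts every 1/30 s (about 33 ms), while each window spans 100 ms, so consecutive windows overlap by roughly two thirds. A small sketch of that framing (the sample rate and signal are placeholders, not taken from the paper):

    # Sketch: 100 ms analysis windows at a 30 Hz frame rate (~33 ms hop).
    import numpy as np

    sr = 16000                 # assumed sample rate in Hz
    signal = np.zeros(2 * sr)  # placeholder: 2 s of silence standing in for audio
    win = int(0.100 * sr)      # 100 ms window   -> 1600 samples
    hop = sr // 30             # 30 Hz frame rate -> 533 samples (~33 ms)

    frames = [signal[s:s + win]
              for s in range(0, len(signal) - win + 1, hop)]
    print(len(frames), "overlapping frames of", win, "samples each")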

Revisiting the EmotiW challenge: how wild is it really?
M Kächele, M Schels, S Meudt, G Palm… – Journal on Multimodal …, 2016 – Springer
… In order to cover a complete set of audio features, we also incorporated the baseline features that were delivered with the original challenge data [11]. They are based on the openEAR toolkit [15] that computes one high-dimensional vector for each snippet of audio data. …

Domain adaptation for speech emotion recognition by sharing priors between related source and target classes
Q Mao, W Xue, Q Rao, F Zhang… – Acoustics, Speech and …, 2016 – ieeexplore.ieee.org
… per chunk contains 16 × 2 × 12 = 384 attributes. To ensure reproducibility, the open source toolkit openEAR [21] is utilized to extract 384 attributes. 5. EXPERIMENTS 5.1. Experimental Setup For the first stage of our model, we …
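
The arithmetic in this entry is the INTERSPEECH 2009 Emotion Challenge layout: 16 low-level descriptor (LLD) contours plus their deltas give 16 × 2 = 32 contours, and applying 12 statistical functionals to each yields 384 attributes per chunk. The sketch below shows that construction schematically; the particular functionals listed are illustrative, not the exact challenge set.

    # Sketch: building a 16 x 2 x 12 = 384-dimensional utterance-level vector
    # from per-frame LLD contours. The functionals are illustrative placeholders.
    import numpy as np

    def functionals(c: np.ndarray) -> np.ndarray:
        """Twelve statistics summarizing one LLD contour."""
        t = np.arange(len(c))
        slope, offset = np.polyfit(t, c, 1)  # linear-regression coefficients
        return np.array([
            c.mean(), c.std(), c.min(), c.max(),
            c.max() - c.min(),      # range
            np.argmin(c) / len(c),  # relative position of the minimum
            np.argmax(c) / len(c),  # relative position of the maximum
            np.median(c), np.percentile(c, 25), np.percentile(c, 75),
            slope, offset,
        ])

    lld = np.random.randn(16, 500)                    # placeholder: 16 LLDs x 500 frames
    delta = np.diff(lld, axis=1, prepend=lld[:, :1])  # delta coefficients
    contours = np.vstack([lld, delta])                # 32 contours in total
    vector = np.concatenate([functionals(c) for c in contours])
    print(vector.shape)                               # -> (384,)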

Speaker-independent speech emotion recognition using Gaussian and SVM classifiers
S Haq, A Ali, M Asif, T Jan, Y Khan… – … -SURJ (Science Series …, 2016 – sujo.usindh.edu.pk
… The results were compared with the state-of-the-art techniques. 3.1 Feature Extraction We extracted 384 audio features that were used in the Interspeech 2009 Emotion Challenge (Schuller et al., 2009) using the openEAR toolkit (Eyben et al., 2009). …

Audio-visual emotion classification using filter and wrapper feature selection approaches
S Haq, M Asif, A Ali, T Jan, N Ahmad… – … Journal-SURJ (Science …, 2016 – sujo.usindh.edu.pk
… al., 2015). Audio features: The low-level descriptors (LLD) including pitch (f0), intensity, loudness, probability of voicing, 8 line spectral frequencies and zero-crossing rate (ZCR) were extracted using the openEAR toolkit. The …

Aggressive actions and anger detection from multiple modalities using Kinect
A Patwardhan, G Knapp – arXiv preprint arXiv:1607.01076, 2016 – arxiv.org
… We implemented the multimodal emotion recognition system using C#.NET and the Kinect API and integrated it with a speech-based emotion detection library called openEar [Eyben et al. … openEAR – Introducing the Munich Open-Source Emotion and Affect Recognition Toolkit. …

Acoustic Features and Modelling
F Eyben – Real-time Speech and Music Classification by Large …, 2016 – Springer
… www.music-ir.org/mirex/abstracts/2010/ES1.pdf. F. Eyben, M. Wöllmer, B. Schuller, openEAR—introducing the Munich open-source emotion and affect recognition toolkit, in Proceedings of the 3rd International Conference on Affective Computing and Intelligent …

Improved multimodal sentiment detection using stressed regions of audio
H Abburi, M Shrivastava… – Region 10 Conference …, 2016 – ieeexplore.ieee.org
… Our proposed system is implemented on a Spanish database. It is observed that in the literature much work has been done by extracting several features using the OpenEAR tool and building a system using the SVM classifier. Instead …

Artificial Neural Network vs. Support Vector Machine For Speech Emotion Recognition
MA Ahmad – Tikrit Journal of Pure Science, 2016 – main.tu-jo.com
… We extract features from each speech sample using the OpenEAR toolkit [39]. Each sample is divided into several frames of equal length, then 68 LLDs are calculated as described in Table 2(a). Delta and double-delta … Figure 2: five emotion classes of the Berlin dataset. …

Daily life support at home through a virtual support partner
S Hanke, E Sandner, S Kadyrov… – 2016 – IET
… facial expressions are derived using Noldus’ FaceReader technology [13], from the skeleton information the affective state is derived by analyzing movement features [14], and from sound captured from a microphone the emotion is found using the OpenEAR toolkit [15]. …

Prediction of emotions from text using sentiment analysis for expressive speech synthesis
E Vanmassenhove, JP Cabral, F Haider – 9th ISCA Speech Synthesis …, 2016 – tara.tcd.ie
… We initially used the same feature set that was used in openEAR [19] to recognize emotions in real time. … “OpenEAR – introducing the Munich open-source emotion and affect recognition toolkit”, Affective Computing and Intelligent Interaction and Workshops, pp. 1–6, 2009. …

Speaker-Independent Speech Emotion Recognition Based Multiple Kernel Learning of Collaborative Representation
C Zha, X Zhang, L Zhao, R Liang – IEICE Transactions on …, 2016 – search.ieice.org
… unvoiced part. The acoustic descriptors and the statistical functionals are detailed in Table 1. Using the openEar toolbox [10], we extract 384-dimensional feature vectors for the utterance, initial, final and voiced levels. The …

Multi-Stage Recognition of Speech Emotion Using Sequential Forward Feature Selection
T Liogienė, G Tamulevičius – Electrical, Control and Communication …, 2016 – degruyter.com
… was applied. Again, the number of folds was limited by the size of the datasets. A total of 6552 different speech emotion features were extracted for the emotion recognition experiment using the OpenEAR toolkit [19]. The features included …

Croatian Emotional Speech Analyses on a Basis of Acoustic and Linguistic Features
B Dropuljić, S Skansi, R Kopal – 2016 – hrcak.srce.hr
… valence and arousal estimation. ACOUSTIC FEATURES: Acoustic features are extracted using the open-source Emotion and Affect Recognition (openEAR) toolkit’s feature-extraction backend, openSMILE [14]. A total of 1941 …

Emotion Recognition from Speech with Acoustic, Non-Linear and Wavelet-based Features Extracted in Different Acoustic Conditions
JC Vásquez Correa – 2016 – bibliotecadigital.udea.edu.co
… The acoustic analysis considers a standard feature set developed for emotion recognition from speech, called OpenEAR, and a set of spectral and noise-derived measures. … In [36], a standard toolbox called OpenEAR was presented. This toolkit computes 5967 measures …

Multimodal Sentiment Analysis of Telugu Songs.
H Abburi, ESA Akkireddy, S Gangashetti… – SAAIP@ IJCAI, 2016 – researchgate.net
… wav format), into 16-bit, 16000 Hz sampling frequency and to a mono channel. To extract a set of audio features like MFCC, chroma, prosody, temporal, spectrum, harmonics and tempo from a wave file, the openEAR/openSMILE toolkit [Eyben et al., 2010] is used. …


Comparative study of multi-stage classification scheme for recognition of Lithuanian speech emotions
T Liogienė, G Tamulevičius – Computer Science and …, 2016 – ieeexplore.ieee.org
… [12] F. Eyben, M. Wollmer, and B. Schuller, “OpenEAR – Introducing the Munich open-source emotion and affect recognition toolkit,” 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, pp. …

Automatic Forest Wood Logging Identification based on Acoustic Monitoring
I Mporas, M Paraskevas – Proceedings of the 9th Hellenic Conference on …, 2016 – dl.acm.org
… [13] F. Eyben, M. Wollmer, and B. Schuller, “OpenEAR – introducing the Munich open-source emotion and affect recognition toolkit,” In Proc. of the 4th International HUMAINE Association Conference on Affective Computing and Intelligent Interaction (ACII 2009). …

Hierarchical method to classify emotions in speech signals
B Boragolla, FF Farook, H Herath… – … and Automation for …, 2016 – ieeexplore.ieee.org
… “Data-Driven Emotion Conversion In Spoken English”. Speech Communication 51.3 (2009): 268-283. [12] Eyben, F., Wollmer, M. and Schuller, B. OpenEAR-introducing the Munich open-source emotion and affect recognition toolkit. 3rd Intl. Conf. …

Online speaker emotion tracking with a dynamic state transition model
O Cirakman, B Gunsel – Pattern Recognition (ICPR), 2016 23rd …, 2016 – ieeexplore.ieee.org
… Table VI. Recall rates for existing work on EMO-DB:

                  Arousal   Valence   All
    GerDA [15]      97.6      82.2    79.1
    HTK [16]        91.5      78.0    73.2
    OpenEAR [16]    96.8      87.0    84.6
    SVM-P [6]       95.2      94.3    86.3

… both “Neutral” and “Low” arousal categories. …

Classification of bipolar disorder episodes based on analysis of voice and motor activity of patients
A Maxhuni, A Muñoz-Meléndez, V Osmani… – Pervasive and Mobile …, 2016 – Elsevier
There is a growing amount of scientific evidence that motor activity is the most consistent indicator of bipolar disorder. Motor activity includes several areas …

Automatic Speech Feature Learning for Continuous Prediction of Customer Satisfaction in Contact Center Phone Calls
J Arias, J Luque – … in Speech and Language Technologies for …, 2016 – books.google.com
… In: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6964–6968, May 2014 7. Eyben, F., Wollmer, M., Schuller, B.: OpenEAR-introducing the Munich open- source emotion and affect recognition toolkit. …

Pervasive and Mobile Computing
A Maxhuni, A Muñoz-Meléndez, V Osmani… – 2016 – ccc.inaoep.mx
… Algorithms were developed to scramble/stretch the actual signal to avoid its original reconstruction while keeping the required properties for analysing the voice. In the current study, we extracted acoustic features from the speech signal using OpenEar [43] and Praat [44]. …

Automatic speech feature learning for continuous prediction of customer satisfaction in contact center phone calls
C Segura, D Balcells, M Umbert, J Arias… – Advances in Speech and …, 2016 – Springer
… In: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6964–6968, May 2014. 7. Eyben, F., Wollmer, M., Schuller, B.: OpenEAR – introducing the Munich open-source emotion and affect recognition toolkit. …

Speaker Age Classification and Regression Using i-Vectors.
J Grzybowska, S Kacprzak – INTERSPEECH, 2016 – pdfs.semanticscholar.org
… com/apps/pubs/default.aspx?id=205119 [14] F. Eyben, M. Wöllmer, and B. Schuller, “openEAR – introducing the Munich open-source emotion and affect recognition toolkit,” in Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. …

Facial expression recognition in video with multiple feature fusion
J Chen, Z Chen, Z Chi, H Fu – IEEE Transactions on Affective …, 2016 – ieeexplore.ieee.org

Pairwise Decomposition with Deep Neural Networks and Multiscale Kernel Subspace Learning for Acoustic Scene Classification
E Marchi, D Tonelli, X Xu, F Ringeval… – 24th Acoustic Scene …, 2016 – fim.uni-passau.de
… 835–838. [19] F. Eyben, M. Wöllmer, and B. Schuller, “OpenEAR – introducing the Munich open-source emotion and affect recognition toolkit,” in Affective Computing and Intelligent Interaction and Workshops. Amsterdam, The Netherlands: IEEE, 2009, pp. 576–581. …

Inhomogeneous point-processes to instantaneously assess affective haptic perception through heartbeat dynamics information
G Valenza, A Greco, L Citi, M Bianchi, R Barbieri… – Scientific …, 2016 – ncbi.nlm.nih.gov
… their applications. Affective Computing, IEEE Transactions on 1, 18–37 (2010). Eyben F., Wollmer M. & Schuller B. OpenEAR-introducing the Munich open-source emotion and affect recognition toolkit. In Affective Computing …

CHEAVD: a Chinese natural emotional audio–visual database
Y Li, J Tao, L Chao, W Bao, Y Liu – Journal of Ambient Intelligence and …, 2016 – Springer

Article 3: RATM: Recurrent Attentive Tracking Model
SE Kahou, V Michalski… – … WITH DEEP NEURAL …, 2016 – publications.polymtl.ca
… Submitted to Transactions on Pattern Analysis and Machine Intelligence, April 2016. Samira Ebrahimi Kahou, Vincent Michalski, Roland …

Social networking data analysis tools & challenges
A Sapountzi, KE Psannis – Future Generation Computer Systems, 2016 – Elsevier
Online Social Networks (OSNs) are considered a spark that burst the Big Data era. The unfolding of every event, breaking news, or trend flows in real time inside OSNs …

Navigation and geolocation within urban and semi-urban environments using low-rate wireless personal area networks
T Perrin – 2016 – uhra.herts.ac.uk
… This dissertation is submitted to the University of Hertfordshire in partial fulfilment of the requirements of the degree of MSc by Research. …

Emotion recognition with deep neural networks
SE Kahou – 2016 – search.proquest.com
Automatic recognition of human emotion has been studied for decades. It is one of the key components in human-computer interaction, with applications in health care, education, entertainment, and advertisement. …

Insights from social networks: a big data analytics approach.
Α Σαπουντζή, A Sapountzi – 2016 – dspace.lib.uom.gr
… ACKNOWLEDGEMENTS: This research is the final result of my Master Thesis project to obtain the Master degree of Information Systems at the University of Macedonia in Thessaloniki. I would like to …

An Investigation into Language Model Data Augmentation for Low-Resourced STT and KWS
G Huang, TF da Silva, L Lamel, JL Gauvain, A Gorin… – ieeeicassp, 2016 – perso.limsi.fr
LIMSI TLP group publication list starting from 1990. A URL is given for each reference having a PostScript file and an abstract in the online publication list. …