openEAR 2015


Notes:

openEAR is the Munich Open-Source Emotion and Affect Recognition Toolkit developed at the Technische Universität München (TUM). It provides efficient (audio) feature extraction algorithms implemented in C++, classifiers, and models pre-trained on well-known emotion databases. It is now maintained and supported by audEERING.

  • Talking books
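
The toolkit's feature extraction is driven by the SMILExtract command-line program (the openSMILE core that openEAR builds on). The Python sketch below shows one minimal way to call it; the binary name, the emobase.conf configuration path, and the file names are illustrative assumptions, not guarantees about a particular openEAR installation.

  # Minimal sketch: extract an openEAR/openSMILE feature vector from a WAV file.
  # Assumes the SMILExtract binary and an emobase.conf feature configuration are
  # installed locally; all paths below are placeholders.
  import subprocess
  from pathlib import Path

  SMILEXTRACT = "SMILExtract"         # openSMILE/openEAR feature extraction binary
  CONFIG = "config/emobase.conf"      # example emotion feature configuration
  WAV_IN = "speech_sample.wav"        # hypothetical input recording
  ARFF_OUT = "features.arff"          # extracted features, written in ARFF format

  def extract_features(wav: str, arff: str) -> None:
      """Run SMILExtract once, producing one feature vector for the input file."""
      subprocess.run([SMILEXTRACT, "-C", CONFIG, "-I", wav, "-O", arff], check=True)

  if __name__ == "__main__":
      extract_features(WAV_IN, ARFF_OUT)
      # The ARFF output can then be fed to a classifier (e.g. an SVM) or to the
      # pre-trained emotion models distributed with openEAR.
      print(Path(ARFF_OUT).read_text()[:500])

The same pattern applies to the larger INTERSPEECH challenge feature sets referenced by several of the papers listed below.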

Resources:

  • emotiw2016 .. emotion recognition in the wild challenge

See also:

100 Best Emotion Recognition Videos | Emotional Agents 2013 | Emotional Agents 2014 | Emotional Agents 2015 | openEAR 2013 | openEAR 2014


Speaker emotional state classification by DPM models with annealed SMC samplers B Gunsel, O Cirakman… – … (EUSIPCO), 2015 23rd …, 2015 – ieeexplore.ieee.org … make use of acoustic features which are originally proposed for speech recognition hence they may not fully model the speaker emotional states [1]. Consequently, a high performance detector could only be achieved by using very large feature sets (i.e., openEAR) [2] or … Related articles All 2 versions

A Deep Feature based Multi-kernel Learning Approach for Video Emotion Recognition W Li, F Abtahi, Z Zhu – Proceedings of the 2015 ACM on International …, 2015 – dl.acm.org … In our approach, we extract LBP-TOP-based video features, openEAR energy/spectral-based audio features, and CNN (convolutional neural network) based deep image features by fine-tuning a pre-trained model with extra emotion images from the web. … Cited by 2 Related articles

Emotion recognition from speech: tools and challenges A Al-Talabani, H Sellahewa… – SPIE Sensing …, 2015 – proceedings.spiedigitallibrary.org … influence its performance. Here, we adopted a set of features extracted by the openEAR software. These features … related literature. Table 1. (33) Low Level Descriptor (LLD) used in Acoustic analysis with OpenEAR. Feature Group Description … Related articles All 4 versions

A new dataset of telephone-based human-human call-center interaction with emotional evaluation I Siegert, K Ohnemus – Proc. of the First International Symposium …, 2015 – researchgate.net … A quite prominent set of features is used proposed by Eyben et al. in the context of the openEAR project (cf. … 801–804. Pittsburgh, USA (2006) 5. Eyben, F., Wöllmer, M., Schuller, B.: Openear – introducing the munich open-source emotion and affect recognition toolkit. In: Proc. … Cited by 3 Related articles

SFS feature selection technique for multistage emotion recognition T Liogienė, G Tamulevičius – Information, Electronic and …, 2015 – ieeexplore.ieee.org … Considering the limited number of patterns 3-fold cross-validation scheme was applied in order to get proper evaluation of the performance. 6552 different speech emotion features were extracted for this study using OpenEAR toolkit [19]. … Related articles

Analysis of excitation source features of speech for emotion recognition SR Kadiri, P Gangamohan… – INTERSPEECH. …, 2015 – researchgate.net … segments of speech signal [1, 10, 11, 12]. For example, large number of features (brute-force approach) are extracted using open-source toolkit called OpenEAR [1], [13]. Emotions are modeled using discriminative/non-discriminative … Cited by 2 Related articles All 2 versions

Emotion recognition from speech under environmental noise conditions using wavelet decomposition JC Vásquez-Correa, N García… – Security Technology …, 2015 – ieeexplore.ieee.org … databases. The table also includes the accuracy obtained using the openEAR toolkit [15], a tool specially designed for the characterization of emotional speech, in order to compare the results with the state of the art methods. … Cited by 1 Related articles

On the importance of subtext in recommender systems P Grasch, A Felfernig – icom, 2015 – degruyter.com … SPEECHREC uses a version of the Simon 2 2 speech recognition system that was modified to include the arousal score calculated by openEAR as described in Section 3.2.2. Simon in turn uses the PocketSphinx decoder of the CMU SPHINX speech recognition framework 3 3 … Cited by 2 Related articles All 4 versions

A Proposed Approach with Analysis of Speech Signals for Sentiment Detection A Tyagi, N Chandra – Communication Systems and Network …, 2015 – ieeexplore.ieee.org … Some of the tools are like: Speech Analyzer, OpenSmile/OpenEar etc. Speech Analyzer as the name suggest required for analyzing speech or voice of an individual based on its pitch. … Another tool which could also be used for the purpose of this task is OpenSmile/OpenEar. … Related articles

Video and image based emotion recognition challenges in the wild: Emotiw 2015 A Dhall, OV Ramana Murthy, R Goecke… – Proceedings of the …, 2015 – dl.acm.org … The features are extracted using the open-source Emotion and Affect Recognition (openEAR) [7] toolkit backend openSMILE [8]. The feature set consists of 34 energy & spectral related low-level descriptors (LLD) × 21 functionals, 4 voicing related LLD × 19 functionals, 34 … Cited by 36 Related articles All 3 versions
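
This entry (and the recurrent-network paper further down) refers to a 1582-dimensional openEAR/openSMILE feature vector. As a rough sanity check, the count is consistent with the INTERSPEECH 2010 Paralinguistic Challenge layout; the exact breakdown below is an assumption, since the snippet above is truncated.

  # Rough check of the 1582-dimensional feature count, assuming the
  # INTERSPEECH 2010 Paralinguistic Challenge layout.
  energy_spectral = (34 + 34) * 21  # 34 energy/spectral LLDs plus deltas, 21 functionals
  voicing = (4 + 4) * 19            # 4 voicing-related LLDs plus deltas, 19 functionals
  extras = 2                        # F0 onset count and turn duration
  print(energy_spectral + voicing + extras)  # 1582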

Towards an intelligent framework for multimodal affective data analysis S Poria, E Cambria, A Hussain, GB Huang – Neural Networks, 2015 – Elsevier An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook everyday. In order to cope with the growth of. Cited by 44 Related articles All 16 versions

Towards a Conversational Expert System for Rhetorical and Vocal Quality Assessment in Call Center Talks M Walther, B Neuber, O Jokisch, T Mellouli – 2015 – slate2015.org … The features were extracted with openEAR [11] based on the configuration file from the ”Interspeech Paralinguistic Challenge 2010” [12]. … [11] F. Eyben, M. Wöllmer, and B. Schuller, “openear – introducing the munich open-source emotion and affect recognition toolkit,” in Proc. … Related articles All 2 versions

Recurrent neural networks for emotion recognition in video S Ebrahimi Kahou, V Michalski, K Konda… – Proceedings of the …, 2015 – dl.acm.org … the video clips. These are based on the approach from [27]. It uses 1582 features extracted with the open-source Emotion and Affect Recognition (openEAR) [12] toolkit which uses openSMILE [11] as backend. The toolkit encapsulates … Cited by 15 Related articles All 4 versions

On the development of a service robot for social interaction with the elderly D Portugal, P Trindade, E Christodoulou… – … for Active and …, 2015 – ieeexplore.ieee.org … Real-time emotion and affect recognition is possible using the Open-Source Emotion and Affect Recognition (openEAR) framework [23]. … [23] Eyben, F., Wöllmer, M., Schuller, B.: ‘openEAR – Introducing the Munich Open-Source Emotion and Affect Recognition Toolkit’. Proc. … Related articles All 4 versions

Within and cross-corpus speech emotion recognition using latent topic model-based features M Shah, C Chakrabarti, A Spanias – … Journal on Audio, Speech, and Music …, 2015 – Springer … Energy and MFCCs are extracted using the HTK Toolkit [44], while the F0 estimates are extracted using the OpenEar Affect Recognition Toolkit [16]. … These features were extracted using the openEAR Affect Recognition toolkit [16]. … Cited by 4 Related articles All 7 versions

Speech-based Recommender Systems P Grasch – grasch.net Page 1. Peter Grasch Speech-based Recommender Systems Master’s Thesis Graz University of Technology Institute for Software Technology Supervisor: Univ.-Prof. Dipl-Ing. Dr.techn. Alexander Felfernig Graz, April 2015 Page 2. Page 3. Statutory Declaration … Related articles

open-Source Media Interpretation by Large feature-space Extraction F Eyben, F Weninger, M Wöllmer, B Schuller – academia.edu … The first publicly available version of openSMILE was contained in the first Emotion and Affect recognition toolkit openEAR as the feature extraction core. openEAR was introduced at the Affective Computing and Intelligent Interaction (ACII) conference in 2009. … Related articles All 2 versions

Learning speech emotion features by joint disentangling-discrimination W Xue, Z Huang, X Luo, Q Mao – Affective Computing and …, 2015 – ieeexplore.ieee.org … Thus, the feature vector per chunk contains 16 × 2 × 12 = 384 attributes. To ensure reproducibility, the open source toolkit openEAR [30] is utilized to extract 384 attributes. The performance of the raw acoustic features is used as a baseline. … Related articles All 3 versions

SocialRobot: An interactive mobile robot for elderly home care D Portugal, P Alvito, J Dias, G Samaras… – 2015 IEEE/SICE …, 2015 – ieeexplore.ieee.org … Based on the available face recognition ROS package. • emotion recognition (perception): real-time emotion and affect recognition through speech. Based on the openEAR software. • word spotting (perception): recognition of a limited set of simple words through speech. … Cited by 1 Related articles

Conflict Cues in Call Center Interactions M Koutsombogera, D Galanis, MT Riviello… – Conflict and Multimodal …, 2015 – Springer … Schuller et al. 2010). The speech features are computed using openSMILE, the audio feature extraction front-end component of the open-source Emotion and Affect Recognition (openEAR) toolkit (Eyben et al. 2010). A total … Related articles All 4 versions

SenticNet E Cambria, A Hussain – Sentic Computing, 2015 – Springer … 417–422 (2006). 122. Eyben, F., Wollmer, M., Schuller, B.: OpenEAR—introducing the munich open-source emotion and affect recognition toolkit. In: 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009), Amsterdam, pp. … Cited by 2 Related articles

Minimal cross-correlation criterion for speech emotion multi-level feature selection T Liogiene, G Tamulevicius – Electrical, Electronic and …, 2015 – ieeexplore.ieee.org … 1517–1520, 2005. [19] F. Eyben, M. Wollmer and B. Schuller, “OpenEAR – Introducing the Munich open-source emotion and affect recognition toolkit,“ 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, pp. 1–6, September 2009. Fig. … Cited by 1 Related articles

A system for recognizing human emotions based on speech analysis and facial feature extraction: applications to Human-Robot Interaction M Rabiei – 2015 – dspace-uniud.cineca.it … 51 2-6- Bezier curve 51 2-6-2 Support Vector Machine (SVM) 57 2-7 Open-source toolkit for sound recognition 58 2-7-1 OpenEAR toolkit 58 2-7-2 PRATT Software 58 2-8 Programming in C/C++ and open CV 60 3- Implementation the emotion recognition system 62 … Related articles All 2 versions

Sentic Applications E Cambria, A Hussain – Sentic Computing, 2015 – Springer … 417–422 (2006). 122. Eyben, F., Wollmer, M., Schuller, B.: OpenEAR—introducing the munich open-source emotion and affect recognition toolkit. In: 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009), Amsterdam, pp. … Related articles

Exploring dataset similarities using PCA-based feature selection I Siegert, R Böck, A Wendemuth… – … Interaction (ACII), 2015 …, 2015 – ieeexplore.ieee.org … features. According to [7], we also decided to start with a larger feature collection. Since we intended reproducibility, we selected the feature set proposed by Eyben et al. in the context of the openEAR project [21]. The feature … Cited by 1 Related articles All 8 versions

Sentic Patterns E Cambria, A Hussain – Sentic Computing, 2015 – Springer … 417–422 (2006). 122. Eyben, F., Wollmer, M., Schuller, B.: OpenEAR—introducing the munich open-source emotion and affect recognition toolkit. In: 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009), Amsterdam, pp. … Related articles

Talking Books: the development of an interactive, educational, digital application E Drescher – 2015 – aut.researchgateway.ac.nz … connection, cutting out the processing time and resulting in a faster response. However, the external server is much more powerful than a mobile device, so the mobile device must compensate with smaller vocabularies to get accurate recognition. OpenEar’s … All 2 versions

Harmony search for feature selection in speech emotion recognition Y Tao, K Wang, J Yang, N An… – Affective Computing and …, 2015 – ieeexplore.ieee.org … [11] F. Eyben, M. Wollmer, B. Schuller, “openEAR – Introducing the Munich Open-Source Emotion and Affect Recognition Toolkit”, in Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. 3rd International Conference, 2009, © IEEE. doi: … Cited by 1 Related articles All 3 versions

Efficient speech emotion recognition using binary support vector machines & multiclass SVM NR Kanth, S Saraswathi – 2015 IEEE International Conference …, 2015 – ieeexplore.ieee.org … Speech Comm. and Technology, pp. 1517-1520, 2005. [20] F. Eyben, M. Wöllmer, and B. Schuller, “openEAR—Introducing the Munich Open-Source Emotion and Affect Recognition Toolkit,” Proc. Third Int’l Conf. Affective Computing and Intelligent Interaction, Sept. 2009. … Related articles

Cross-corpus analysis for acoustic recognition of negative interactions I Lefter, HT Nefs, CM Jonker… – … Interaction (ACII), 2015 …, 2015 – ieeexplore.ieee.org … promising feature types and functionals covering prosodic, spectral, and voice quality features [33]. The features are extracted using OpenEAR [37]. The 16 low-level descriptors chosen are: zero-crossing-rate (ZCR) from the … Cited by 1 Related articles All 5 versions

Human Affect Recognition: Audio-Based Methods B Schuller, F Weninger – Wiley Encyclopedia of Electrical and …, 2015 – Wiley Online Library … frequency. The open-source Emotion and Affect Recognition Toolkit (openEAR) combines openSMILE with pretrained models for emotion recognition (54). A similar initiative, yet not open source, is the EmoVoice toolkit (55). … Cited by 1 Related articles

Wearable Tools for Affective Remote Collaboration K Gupta – 2015 – ir.canterbury.ac.nz Page 1. Wearable Tools for Affective Remote Collaboration Kunal Gupta University of Canterbury Page 2. Wearable Tools for Affective Remote Collaboration by Kunal Gupta Dissertation submitted in partial fulfillment for the degree of Master of Human Interface Technology at the … Related articles All 2 versions

Low-Order Multi-Level Features for Speech Emotion Recognition G Tamulevicius, T Liogiene – Baltic Journal of Modern …, 2015 – search.proquest.com … 1970-1973. Eyben, F., Wollmer, M., Schuller, B. (2009, September 10-12). openEAR – Introducing the Munich Open-Source Emotion and Affect Recognition Toolkit. Affective Computing and Intelligent Interaction and Workshops, pp. 1-6. … Cited by 2 Related articles All 3 versions

An API for smart objects and multimodal user interfaces for the smart home and office CR Rubio – 2015 – dspace.mit.edu … Example software libraries include the HTML5 Web Speech API, CMU Sphinx, Microsoft Speech Platform SDK, WAMI Toolkit, OpenEAR: Munich Open-Source Emotion and Affect Recognition Toolkit, and OpenEars. The Microsoft … Related articles

Ensemble methods for continuous affect recognition: multi-modality, temporality, and challenges M Kächele, P Thiam, G Palm, F Schwenker… – Proceedings of the 5th …, 2015 – dl.acm.org Page 1. Ensemble Methods for Continuous Affect Recognition: Multi-modality, Temporality, and Challenges Markus Kächele Institute of Neural Information Processing Ulm University, Germany markus.kaechele@uni-ulm.de … Cited by 5 Related articles All 2 versions

Can deep learning revolutionize mobile sensing? ND Lane, P Georgiev – Proceedings of the 16th International Workshop …, 2015 – dl.acm.org … [13] L. Deng and D. Yu. Deep Learning: Methods and Applications. Now Publishers Inc. Jan. 2014. [14] F. Eyben, M. Wöllmer, and B. Schuller. OpenEar – Introducing the Munich Open-source Emotion and Affect Recognition Toolkit. In ACII. [15] K. Han, D. Yu, and I. Tashev. … Cited by 24 Related articles All 10 versions

Variational Infinite Hidden Conditional Random Fields K Bousmalis, S Zafeiriou, LP Morency… – IEEE transactions on …, 2015 – ieeexplore.ieee.org Page 1. Variational Infinite Hidden Conditional Random Fields Konstantinos Bousmalis, Student Member, IEEE, Stefanos Zafeiriou, Member, IEEE, Louis-Philippe Morency, Member, IEEE, Maja Pantic, Fellow, IEEE, and Zoubin Ghahramani, Member, IEEE … Cited by 1 Related articles All 10 versions

Context Recognition Methods using Audio Signals for Human-Machine Interaction M Shah – 2015 – repository.asu.edu Page 1. Context Recognition Methods using Audio Signals for Human-Machine Interaction by Mohit Shah A Dissertation Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy Approved April 2015 by the Graduate Supervisory Committee: … Related articles All 2 versions

Reward-based learning for virtual neurorobotics through emotional speech processing LCJ Bray, GB Ferneyhough, ER Barker… – Value and Reward …, 2015 – books.google.com … Pattern Recognit. 44, 572–587. Eyben, F., Wollmer, M., and Schuller, B. (2009). “OpenEAR: introducing the munich open-source emotion and affect recognition toolkit,” in Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. … Cited by 3 Related articles All 16 versions

Situation-and user-adaptive dialogue management G Bertrand – 2015 – oparu.uni-ulm.de Page 1. Situation- and User-Adaptive Dialogue Management Gregor Bertrand geboren in Ravensburg Institute of Communications Engineering: Dialogue Systems University of Ulm Dissertation zur Erlangung des Doktorgrades Dr.rer.nat. … Related articles All 2 versions

Speaker height estimation from speech: Fusing spectral regression and statistical acoustic models JHL Hansen, K Williams, H Bořil – The Journal of the Acoustical …, 2015 – scitation.aip.org ISSN: 0001-4966; DOI: http://dx.doi.org/10.1121/1.4927554; Volume 138, Issue 2, pages 1052-1067; © 2015 Author(s … Cited by 1 Related articles All 10 versions

Emotion transfer protocol V Wikström – 2015 – aaltodoc.aalto.fi Page 1. Emotion Transfer Protocol Experiments in Emotion Transmission Valtteri Wikström 2015 Master’s Thesis MA in New Media Department of Media School of Arts, Design and Architecture Aalto University Page 2. Page 3. Abstract … Related articles

Cross Platform Development Possibilities and drawbacks of the Xamarin platform DIFHN Haberl – 2015 – marshallplan.at Page 1. FH JOANNEUM University of Applied Sciences Cross Platform Development Possibilities and drawbacks of the Xamarin platform DI (FH) Norbert Haberl 24. August 2015 Page 2. Table of Content DI (FH) Norbert Haberl 2 | 51 Table of Content … Related articles

Medium term speaker state detection by perceptually masked spectral features C Sezgin, B Gunsel, J Krajewski – Speech Communication, 2015 – Elsevier We propose a method based on perceptual prosodic features for medium term speaker state classification, particularly sleepiness detection. Unlike existing metho. Related articles

The use of ensemble techniques in multiclass speech emotion recognition to improve both accuracy and confidence in classifications A Murphy – 2015 – aran.library.nuigalway.ie Page 1. Provided by the author(s) and NUI Galway in accordance with publisher policies. Please cite the published version when available. Downloaded 2016-05-20T07:24:25Z Some rights reserved. For more information, please see the item record link above. … Related articles All 2 versions

Deriving Conversational Social Contexts from Audio-Data HJ Schäfer – 2015 – mediatum.ub.tum.de Page 1. FAKULTÄT FÜR INFORMATIK TECHNISCHE UNIVERSITÄT MÜNCHEN Master’s Thesis Deriving Conversational Social Contexts from Audio-Data Hanna Jasmin Schäfer Page 2. FAKULTÄT FÜR INFORMATIK TECHNISCHE UNIVERSITÄT MÜNCHEN Master’s Thesis …