openEAR 2013


See also: Affective Computing & Dialog Systems 2011


Sleepiness detection from speech by perceptual features B Gunsel, C Sezgin, J Krajewski – Acoustics, Speech and …, 2013 – ieeexplore.ieee.org … The highest class-wise averaged rate achieved is reported as over 80%. The openEAR emotional search engine has been adapted to the sleepiness detection problem in most of the recent studies. openEAR is a generic emotion …

Affect Intensity Estimation Using Multiple Modalities AS Patwardhan, GM Knapp – 2013 – aaai.org … In case of features from the speech modality, this research used the openEAR toolkit (Eyben, Wöllmer, and Schuller 2009). … Eyben, F., Wöllmer, M., Schuller, B., 2009. openEAR – Introducing the Munich Open-Source Emotion and Affect Recognition Toolkit. …

Acoustic feature selection utilizing multiple kernel learning for classification of children with autism spectrum and typically developing children Y Kakihara, T Takiguchi, Y Ariki… – … Integration (SII), 2013 …, 2013 – ieeexplore.ieee.org … In this paper, we used the open-source openEAR [15], [16] to extract features such as signal energy, FFT-spectrum, mel-spectrum, MFCC, line spectral frequencies (line spectral pairs), pitch, voice quality (Harmonics-To-Noise Ratio), LPC coefficients, Perceptual Linear Predictive …

Excitation source and low level descriptor features fusion for emotion recognition using SVM and ANN A Al-Talabani, H Sellahewa… – Computer Science and …, 2013 – ieeexplore.ieee.org … Accordingly, we introduced a new set of Excitation Source features (referred to as ES in this paper) obtained from the LP-residual signal as complementary information to the set of 6552 LLD spectral and prosodic features extracted using the openEAR software (referred to as OP in … Cited by 1

The social signal interpretation (SSI) framework: multimodal signal processing and recognition in real-time J Wagner, F Lingenfelser, T Baur, I Damian… – Proceedings of the 21st …, 2013 – dl.acm.org … training corpora. A toolkit developed for real-time affect recognition from speech is the openEAR toolkit with its feature extracting backend openSMILE [3]. It is, however, developed with a strong focus on audio processing. Social … Cited by 15

Emotion recognition with boosted tree classifiers M Day – Proceedings of the 15th ACM on International …, 2013 – dl.acm.org … It is also the feature extraction backend used by OpenEAR [7] (the Munich Open-Source Emotion and Affect Recognition Toolkit) and it includes several feature set definitions intended specifically for emotion recognition. It is fast and supports various platforms. …

Sparse autoencoder-based feature transfer learning for speech emotion recognition J Deng, Z Zhang, E Marchi… – Affective Computing and …, 2013 – ieeexplore.ieee.org … Thus, the total feature vector per chunk contains 16 × 2 × 12 = 384 attributes. To ensure reproducibility, the open-source openEAR toolkit [21] was used to extract the feature set with the pre-defined openEAR configuration for the 2009 challenge. IV. … Cited by 6
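
The 384-attribute set referenced in this snippet is the INTERSPEECH 2009 Emotion Challenge configuration that ships with openEAR's feature extraction backend, openSMILE. A minimal batch-extraction sketch follows, assuming a SMILExtract binary on the PATH and the IS09 config file at the path shown (the exact config location varies between openSMILE releases):

    # Sketch: extract the 384-dim IS09 Emotion Challenge feature set
    # (16 LLDs x 2 (LLD + delta) x 12 functionals = 384 attributes)
    # via openSMILE's SMILExtract command-line tool.
    import subprocess
    from pathlib import Path

    IS09_CONFIG = "config/IS09_emotion.conf"  # assumed path; varies by release

    def extract_is09(wav_path: Path, arff_out: Path) -> None:
        """Append one 384-attribute ARFF instance for a single audio chunk."""
        subprocess.run(
            ["SMILExtract", "-C", IS09_CONFIG,
             "-I", str(wav_path), "-O", str(arff_out)],
            check=True,  # raise CalledProcessError if extraction fails
        )

    for wav in sorted(Path("chunks").glob("*.wav")):
        extract_is09(wav, Path("is09_features.arff"))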

Prediction of strategy and outcome as negotiation unfolds by using basic verbal and behavioral features. E Nouri, S Park, S Scherer, J Gratch, P Carnevale… – …, 2013 – researchgate.net … The following acoustic features were extracted from the audio recordings of participants, using OPENEAR [16] • The mean and standard deviation of the following acoustic features calculated at the end of each quarter of the negotiation task: peak slope, normalized amplitude … Cited by 3

Multimodal sentiment analysis of Spanish online videos V Rosas, R Mihalcea, LP Morency – IEEE Intelligent Systems, 2013 – computer.org … The utterance segmentation was based on long pauses that could easily be detected using tools such as Praat and OpenEAR [44]. … We used the open source software OpenEAR [44] to automatically compute the pitch and voice intensity. … Cited by 8

Children’s Emotion Recognition from Spontaneous Speech Using a Reduced Set of Acoustic and Linguistic Features S Planet, I Iriondo – Cognitive Computation, 2013 – Springer … elements. To perform the acoustic parameterisation, we used the feature extraction software openSMILE included in the Munich open-source Emotion and Affect Recognition Toolkit (openEAR) [9]. Linguistic Parameterisation … Cited by 1

Speaker-Independent Speech Emotion Recognition Based on Two-Layer Multiple Kernel Learning JIN Yun, S Peng, W Zheng, Z Li… – … on Information and …, 2013 – search.ieice.org … by 20 judges. This selection set is used in the paper. With the openEAR toolkit [11], 988 features are extracted as 19 functionals of 26 acoustic low-level descriptors (LLD) and the corresponding first-order deltas. The 26 low-level … Cited by 1
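
As a check on the feature count quoted in this snippet: the 26 LLDs together with their first-order deltas give 52 contours, and applying the 19 functionals to each yields the stated dimensionality:

    \[ 26\ \text{LLDs} \times 2\ (\text{LLD} + \Delta) \times 19\ \text{functionals} = 988 \]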

Classification of emotional speech units in call centre interactions D Galanis, S Karabetsos… – … ), 2013 IEEE 4th …, 2013 – ieeexplore.ieee.org … of speech features proposed for the Interspeech 2010 paralinguistic challenge was considered [3]. The speech features are computed using openSMILE, the audio feature extraction front-end component of the open-source Emotion and Affect Recognition (openEAR) toolkit [12]. …

On acoustic emotion recognition: compensating for covariate shift A Hassan, R Damper, M Niranjan – Audio, Speech, and …, 2013 – ieeexplore.ieee.org … Cited by 6

Utterance-Level Multimodal Sentiment Analysis. V Pérez-Rosas, R Mihalcea, LP Morency – ACL (1), 2013 – aclweb.org … 4.1.2 Acoustic Features Acoustic features are automatically extracted from the speech signal of each utterance. We used the open source software OpenEAR (Schuller, 2009) to automatically compute a set of acoustic features. … Cited by 2

AVEC 2014 – 3D Dimensional Affect and Depression Recognition Challenge M Valstar, B Schuller, K Smith, T Almaev, F Eyben… – 2013 – cs.nott.ac.uk … respect to the INTERSPEECH 2009 Emotion Challenge (384 features) [23] and INTERSPEECH 2010 Paralinguistic Challenge (1582 features) [24] is given to the participants, again using the freely available open-source Emotion and Affect Recognition (openEAR) [8] toolkit’s …

Feature space dimension reduction in speech emotion recognition using support vector machine BC Chiou, CP Chen – Signal and Information Processing …, 2013 – ieeexplore.ieee.org … A. Baseline: The baseline feature set is the one used in [9]. The openEAR toolkit is used to extract 6552 features consisting of 39 functionals of 56 acoustic low-level descriptors (LLD) along with the first- and second-order delta regression coefficients. …
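
Again as a check on the arithmetic in this snippet: each of the 56 LLDs contributes a static contour plus first- and second-order deltas, and the 39 functionals are applied to all of them:

    \[ 56\ \text{LLDs} \times 3\ (\text{LLD},\ \Delta,\ \Delta\Delta) \times 39\ \text{functionals} = 6552 \]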

Feature Based Method to Detect Human Facial Expressions G Singh, V Wasson – ijcsce.org … IEEE Signal Processing Magazine, 18(1):32–80, January 2001. [9] F. Eyben, M. Wöllmer, and B. Schuller. openEAR – introducing the Munich open-source emotion and affect recognition toolkit. In Proc. of ACII 2009, Amsterdam, pages 576–581, 2009. …

On the development of an automatic voice pleasantness classification and intensity estimation system L Pinto-Coelho, D Braga, M Sales-Dias… – Computer Speech & …, 2013 – Elsevier … In recent years, with the appearance of several feature extraction tools, such as OpenEAR (Eyben et al., 2009), and with the improvement of feature selection algorithms, we have seen an increase in the number of features, allowing additional dimensions to be explored. … Cited by 5

Emotion recognition in the wild challenge 2013 A Dhall, R Goecke, J Joshi, M Wagner… – Proceedings of the 15th …, 2013 – dl.acm.org … Paralinguistic challenge (1582 features) [17] are used. The features are extracted using the open-source Emotion and Affect Recognition (openEAR) [18] toolkit backend openSMILE [19]. The feature set consists of 34 energy … Cited by 14

[BOOK] Computational paralinguistics: emotion, affect and personality in speech and language processing B Schuller, A Batliner – 2013 – books.google.com … Cited by 4

Infinite hidden conditional random fields for human behavior analysis K Bousmalis, S Zafeiriou, L Morency… – Neural Networks and …, 2013 – ieeexplore.ieee.org … Cited by 8

Audio Features B Schuller – Intelligent Audio Analysis, 2013 – Springer … Eyben, F., Wöllmer, M., Schuller, B.: openEAR – introducing the Munich open-source emotion and affect recognition toolkit. …

Integration of Temporal Contextual Information for Robust Acoustic Recognition of Bird Species from Real-Field Data I Mporas, T Ganchev, O Kocsis, N Fakotakis… – … Journal of Intelligent …, 2013 – mecs-press.org … [19] F. Eyben, M. Wöllmer, and B. Schuller, “openEAR – introducing the Munich open-source emotion and affect recognition toolkit,” in Proc. of the 4th International HUMAINE Association Conference on Affective Computing and Intelligent Interaction (ACII 2009). … Cited by 1

Utilizing Bimodal Emotion Recognition for Adaptive Artificial Intelligence A Murphy, S Redfern – ijesit.com … ACM Multimedia (MM), ACM, Firenze, Italy. [16] Eyben, F.; Wöllmer, M.; Schuller, B., “openEAR – Introducing the Munich open-source emotion and affect recognition toolkit,” Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. …

A two-layer model for music pleasure regression X Wang, Y Wu, X Chen, D Yang – Multimedia and Expo …, 2013 – ieeexplore.ieee.org … 2008, pp. 1369–1372. [7] F. Eyben, M. Wöllmer, and B. Schuller, “openEAR – Introducing the Munich open-source emotion and affect recognition toolkit,” in Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. …

Speaker state recognition using an HMM-based feature extraction method R Gajšek, F Mihelič, S Dobrišek – Computer Speech & Language, 2013 – Elsevier … frame. In the majority of the tests, we used the MFCCs produced by the openSMILE feature extractor (Eyben et al., 2010), which is a part of the open-source Emotion and Affect Recognition (openEAR) toolkit (Eyben et al., 2009). … Cited by 9

Affective classification of generic audio clips using regression models. N Malandrakis, S Sundaram, A Potamianos – INTERSPEECH, 2013 – sail.usc.edu … lexicon creation,” in Proc. Interspeech, 2011, pp. 2977–2980. [15] F. Eyben, M. Wöllmer, and B. Schuller, “openEAR – introducing the Munich open-source emotion and affect recognition toolkit,” in Proc. ACII, 2009, pp. 1–6. [16] O …

Automatic classification of palatal and pharyngeal wall shape categories from speech acoustics and inverted articulatory signals M Li, A Lammert, J Kim, PK Ghosh, S Narayanan – 2013 – jie.sysu.edu.cn … 2, no. 3, p. 27, 2011. [31] F. Eyben, M. Wöllmer, and B. Schuller, “openEAR – introducing the Munich open-source emotion and affect recognition toolkit,” in Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. 3rd International Conference on. … Cited by 1

[BOOK] Intelligent audio analysis BW Schuller – 2013 – Springer …

Speech emotion recognition approaches in human computer interaction S Ramakrishnan, IMM El Emary – Telecommunication Systems, 2013 – Springer … for the benefit of readers. Emotional speech software tools: HMM Toolkit (HTK); openEAR; xwavesp package; FEELtrace; Praat; ANOVA; openSMILE; ESEDA feature extraction module; ESMERALDA; EmoVoice; EmoRate. … Cited by 7

Identifying salient sub-utterance emotion dynamics using flexible units and estimates of affective flow EM Provost – … , Speech and Signal Processing (ICASSP), 2013 …, 2013 – ieeexplore.ieee.org … 5, no. 9/10, pp. 341–345, 2002. [17] F. Eyben, M. Wöllmer, and B. Schuller, “openEAR – introducing the Munich open-source emotion and affect recognition toolkit,” in Affective Computing and Intelligent Interaction (ACII), Amsterdam, The Netherlands, Sept. 2009, pp. 25–29. … Cited by 1

AVEC 2013: the continuous audio/visual emotion and depression recognition challenge M Valstar, B Schuller, K Smith, F Eyben… – Proceedings of the 3rd …, 2013 – dl.acm.org … IEEE, 2013, to appear. [7] F. Eyben, M. Wöllmer, and B. Schuller. openEAR – Introducing the Munich Open-Source Emotion and Affect Recognition Toolkit. In Proc. ACII, pages 576–581, Amsterdam, The Netherlands, 2009. [8] F. Eyben, M. Wöllmer, and B. Schuller. … Cited by 13

A Multimodal Emotion Detection System during Human–Robot Interaction F Alonso-Martín, M Malfaz, J Sequeira, JF Gorostiza… – Sensors, 2013 – mdpi.com … 4.1. Voice Features Extraction. There are several open sound feature extraction systems. In this work, we have experimented with “OpenSMILE” [53] and with another version that includes the emotion classifier, “OpenEAR” [54]. … Cited by 1

Towards Efficient Multi-Modal Emotion Recognition S Dobrišek, R Gajšek, F Mihelič, N Pavešić, V Štruc – Int J Adv Robotic …, 2013 – researchgate.net … Cited by 2

Using emotional noise to uncloud audio-visual emotion perceptual evaluation EM Provost, I Zhu, S Narayanan – Multimedia and Expo (ICME), …, 2013 – ieeexplore.ieee.org … Cited by 1

Prediction of Visual Backchannels in the Absence of Visual Context Using Mutual Influence D Ozkan, LP Morency – Intelligent Virtual Agents, 2013 – Springer …

Applications in Intelligent Speech Analysis B Schuller – Intelligent Audio Analysis, 2013 – Springer … human-computer interaction. IEEE Signal Process. Mag. 18(1), 32–80 (2001); Eyben, F., Wöllmer, M., Schuller, B.: openEAR – introducing the Munich open-source emotion and affect recognition toolkit. In: Proceedings 3rd …

[BOOK] Common Vestibular Disorders-II-ECAB NN Mathur – 2013 – books.google.com …

Reward-based learning for virtual neurorobotics through emotional speech processing LCJ Bray, GB Ferneyhough, ER Barker… – Frontiers in …, 2013 – ncbi.nlm.nih.gov … Cited by 1

Automatic speaker age and gender recognition using acoustic and prosodic level information fusion M Li, KJ Han, S Narayanan – Computer Speech & Language, 2013 – Elsevier The paper presents a novel automatic speaker age and gender identification approach which combines seven different methods at both acoustic and prosodic levels. Cited by 29

Paralinguistics in speech and language—state-of-the-art and the challenge B Schuller, S Steidl, A Batliner, F Burkhardt… – Computer Speech & …, 2013 – Elsevier Paralinguistic analysis is increasingly turning into a mainstream topic in speech and language processing. This article aims to provide a broad overview of the … Cited by 59

Induction, recording and recognition of natural emotions from facial expressions and speech prosody K Karpouzis, G Caridakis, R Cowie… – Journal on Multimodal …, 2013 – Springer … J Multimodal User Interfaces (2013) 7:195–206, DOI 10.1007/s12193-013-0122-3. … Cited by 1

Emotion recognition method based on normalization of prosodic features M Suzuki, S Nakagawa, K Kita – Signal and Information …, 2013 – ieeexplore.ieee.org … 312–315. [8] F. Eyben, M. Wöllmer, and B. Schuller, “openEAR – Introducing the Munich open-source emotion and affect recognition toolkit,” in Proc. 4th International HUMAINE Association Conference on Affective Computing and Intelligent Interaction 2009 (ACII 2009), …

Latent mixture of discriminative experts D Ozkan, LP Morency – Multimedia, IEEE Transactions on, 2013 – ieeexplore.ieee.org … Cited by 4

Multimodal Automatic User Disposition Recognition in Human-Machine Interaction DIR Böck – iikt.ovgu.de … Dissertation for the degree of Doktoringenieur (Dr.-Ing.), Institut für Informations- und Kommunikationstechnik (IIKT). …

[BOOK] Listening heads I de Kok – 2013 – doc.utwente.nl … Doctoral dissertation, University of Twente. …