openEAR 2014


Notes:

  • Affect recognition
  • Affective agent
  • Emotion detection
  • Emotion detection system
  • Emotion recognition

Resources:

See also:

Emotional Agents | Emotional Agents 2012 | openEAR 2013


Research and Development Tools in Affective Computing MS Hussain, SK D’Mello, RA Calvo – The Oxford Handbook of …, 2014 – books.google.com … Then there are application-specific tools (e.g., Gtrace, openEAR) that are built for specific purposes. … Respondents had the option of responding to more than one tool (openEAR, AuBT, in-house, etc.) within a category (e.g., signal processing and analysis). … Related articles

Multisensory Emotion Recognition With Speech And Facial Expression JP Roeland – 2014 – aquila.usm.edu … return the singular emotion felt. Key words: emotion recognition, affective computing, openSMILE, openEAR, Weka, human emotion detection from image … Subsection 3.2.4: Ramifications of Porting … Section 3.3: openSMILE/openEAR and Weka … Related articles

Speech Emotion Recognition Using Deep Neural Network and Extreme Learning Machine K Han, D Yu, I Tashev – Fifteenth Annual Conference of the …, 2014 – researchgate.net … Another approach is a state-of-the-art toolkit for emotion recognition: OpenEAR [11]. … 7512–7516. [11] F. Eyben, M. Wollmer, and B. Schuller, “OpenEAR – introducing the Munich open-source emotion and affect recognition toolkit,” in Proceedings of ACII 2009. IEEE, 2009, pp. … Cited by 2 Related articles All 10 versions

A Broadcast News Corpus for Evaluation and Tuning of German LVCSR Systems F Weninger, B Schuller, F Eyben, M Wöllmer… – arXiv preprint arXiv: …, 2014 – arxiv.org … A baseline set contained Mel Frequency Cepstral Coefficients (MFCC) 1–12 and signal log-energy, and their first (Δ) and second order regression coefficients (ΔΔ), which were extracted using the openEAR feature extractor [10], and are identical to the features extracted by the … Related articles All 4 versions
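
The first and second order regression coefficients (Δ, ΔΔ) referenced above are conventionally computed with the standard regression formula over a ±N frame window. A minimal NumPy sketch, assuming N = 2 (openEAR's actual window setting is configuration-dependent):

    import numpy as np

    def regression_deltas(feats, N=2):
        # feats: (frames, coeffs) array of static features,
        # e.g. MFCC 1-12 plus log-energy. Returns first-order
        # deltas; apply twice to obtain delta-delta coefficients.
        padded = np.pad(feats, ((N, N), (0, 0)), mode="edge")
        denom = 2.0 * sum(n * n for n in range(1, N + 1))
        deltas = np.zeros_like(feats, dtype=float)
        for t in range(len(feats)):
            for n in range(1, N + 1):
                deltas[t] += n * (padded[t + N + n] - padded[t + N - n])
        return deltas / denom

    # Full vector: static, delta, delta-delta concatenated, e.g.
    # d1 = regression_deltas(static); full = np.hstack([static, d1, regression_deltas(d1)])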

Speech Emotion Recognition with Cross-lingual Databases BC Chiou, CP Chen – Fifteenth Annual Conference of the …, 2014 – mazsola.iit.uni-miskolc.hu … 4. System and Evaluation 4.1. Baseline System The baseline system is the same as the one reported in [12]. The OpenEAR toolkit is used to be our feature extractor [15]. Speech input is processed using Hamming windows of 25 ms with a frame shift of 10 ms. … Related articles All 4 versions
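
The 25 ms Hamming window with 10 ms frame shift mentioned in this snippet is the standard short-time framing; a minimal sketch, assuming a 16 kHz sampling rate:

    import numpy as np

    def frame_signal(x, sr=16000, win_ms=25, hop_ms=10):
        # Slice a 1-D signal into overlapping Hamming-windowed frames.
        win = int(sr * win_ms / 1000)   # 400 samples at 16 kHz
        hop = int(sr * hop_ms / 1000)   # 160 samples at 16 kHz
        n_frames = 1 + max(0, (len(x) - win) // hop)
        window = np.hamming(win)
        return np.stack([x[i * hop:i * hop + win] * window
                         for i in range(n_frames)])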

A Multimodal Dataset for Deception Detection V Pérez-Rosas, R Mihalcea, A Narvaez, M Burzo – lrec-conf.org … We use openEAR (Schuller, 2009), an open source software for acoustic feature extraction. Overall, we obtained a set of 28 acoustic features. … Eyben, F., Wöllmer, M., Schuller, B. (2009). openEAR – introducing the Munich open-source emotion and affect recognition toolkit. In ACII. … Related articles

Introducing shared-hidden-layer autoencoders for transfer learning and their application in acoustic emotion recognition J Deng, R Xia, Z Zhang, Y Liu… – Acoustics, Speech and …, 2014 – ieeexplore.ieee.org … To ensure reproducibility as well, the open-source openEAR toolkit [27] was used with the pre-defined EC configuration. 4.3. … [27] F. Eyben, M. Wollmer, and B. Schuller, “openEAR — Introducing the Munich open-source emotion and affect recognition toolkit,” in Proc. … Cited by 1 Related articles All 4 versions

Emotion Recognition in the Wild with Feature Fusion and Multiple Kernel Learning JK Chen, Z Chen, Z Chi, H Fu – … of the 16th International Conference on …, 2014 – dl.acm.org … provided. The organizers of the emotion recognition challenge have supplied the acoustic features. The features are extracted using the open-source Emotion and Affect Recognition (openEAR) [29] toolkit’s backend openSMILE [30]. The … Related articles

Intelligent Systems’ Holistic Evolving Analysis of Real-Life Universal Speaker Characteristics B Schuller, Y Zhang, F Eyben, F Weninger – mmk.e-technik.tu-muenchen.de … Research in this field has delivered highly promising results and tools for the community including the first widely used open-source affect analysis toolkit openEAR (Eyben et al., 2009) and its large-scale acoustic feature extractor openSMILE (Eyben et al., 2013) which both … Related articles All 2 versions

A feature selection and feature fusion combination method for speaker-independent speech emotion recognition Y Jin, P Song, W Zheng, L Zhao – Acoustics, Speech and …, 2014 – ieeexplore.ieee.org … There are initially about 900 utterances in it. After a listening test by 20 judges, only 494 sentences are kept. 3.2. Feature extraction With the openEAR toolkit [14], the features are extracted as 19 functionals of 26 acoustic low-level descriptors (LLD) and … Cited by 1 Related articles All 2 versions

Augmented Video Conferencing By Multimodal Emotion Recognition S Soleimani, D Lalanne, F Ringeval, A Sonderegger – 2014 – diuf.unifr.ch … openEAR data visualization … Figure 4: General procedure of speech emotion recognition [19]. openEAR is another toolkit for affect recognition from audio and speech, which is used in this project as well. … Related articles

AVEC 2014: 3D Dimensional Affect and Depression Recognition Challenge M Valstar, B Schuller, K Smith, T Almaev… – Proceedings of the 4th …, 2014 – dl.acm.org … respect to the INTERSPEECH 2009 Emotion Challenge (384 features) [24] and INTERSPEECH 2010 Paralinguistic Challenge (1 582 features) [25] is given to the participants, again using the freely available open-source Emotion and Affect Recognition (openEAR) [8] toolkit’s … Cited by 18 Related articles All 3 versions

Towards an intelligent framework for multimodal affective data analysis (Forthcoming/Available Online) S Poria, E Cambria, A Hussain, GB Huang – 2014 – dspace.stir.ac.uk … Accepted manuscript. DOI: http://dx.doi.org/10.1016/j.neunet.2014.10.005 … Related articles All 4 versions

[BOOK] A Biblical Theology of the Holy Spirit K Warrington, TJ Burke – 2014 – books.google.com … At the same time, and in light of our canonical approach, each contributor will also keep an open ear for the sounds where these individual voices blend together in harmony as the theme of the Spirit is traced through Scripture. … Related articles

A Hybrid Distance-Based Method and Support Vector Machines for Emotional Speech Detection V Kobayashi – New Frontiers in Mining Complex Patterns, 2014 – Springer … 4.2 Speech Acoustic Features. Here the unit of analysis is not the utterances but the segment. Thus, we extracted segment-level features. We used the baseline feature set in the openSMILE/openEAR software [12]. The feature set is named the “emobase” set. … Related articles All 3 versions
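
Since the “emobase” set ships as a stock configuration with openSMILE/openEAR, extraction amounts to a single call to the SMILExtract binary. A minimal sketch; the config file location varies between releases and is an assumption here:

    import subprocess

    # Extract the "emobase" utterance-level feature set to an ARFF file.
    subprocess.run(
        ["SMILExtract",
         "-C", "config/emobase.conf",   # assumed path; release-dependent
         "-I", "speech.wav",
         "-O", "features.arff"],
        check=True,
    )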

Emotion recognition in the wild challenge 2014: Baseline, data and protocol A Dhall, R Goecke, J Joshi, K Sikka… – Proceedings of the 16th …, 2014 – dl.acm.org … The features are extracted using the open-source Emotion and Affect Recognition (openEAR) [8] toolkit backend openSMILE [9]. The feature set consists of 34 energy & spectral related low-level descriptors (LLD) × 21 functionals, 4 voicing related LLD × 19 functionals, 34 … Cited by 9 Related articles All 4 versions

Designing an emotion detection system for a socially intelligent human-robot interaction C Chastagnol, C Clavel, M Courgeon… – Natural Interaction with …, 2014 – Springer … the Anger, Happiness, and Sadness classes. We then extracted acoustic parameters from the instances using the openEAR toolkit with the Interspeech 2009 feature set, containing 384 features [16]. We used the resulting set … Cited by 6 Related articles All 6 versions

Prosodic, spectral and voice quality feature selection using a long-term stopping criterion for audio-based emotion recognition M Kachele, D Zharkov, S Meudt… – … (ICPR), 2014 22nd …, 2014 – ieeexplore.ieee.org … Instead of solely relying on one or few of the mentioned feature types, emotion recognition frameworks like openEAR [25] can be employed to compute several thousand features and achieve high classification rates on benchmark datasets. … Cited by 3 Related articles

Comparison of ZigBee Replay Attacks Using a Universal Software Radio Peripheral and USB Radio SD Dalrymple – 2014 – DTIC Document … Thesis, Scott D. Dalrymple, Captain, USAF, AFIT-ENG-14-M-23, Department of the Air Force, Air University. … Related articles

Iterative perceptual learning for social behavior synthesis I de Kok, R Poppe, D Heylen – Journal on multimodal user interfaces, 2014 – Springer … found that the production of backchannels is also cued by a short moment of mutual gaze [1,8]. From each speaker’s audio channel, we extracted acoustic features pitch, intensity and the first 12 mel-frequency cepstrum coefficients (MFCC) every 10 ms using OpenEAR [9]. We … Cited by 6 Related articles All 11 versions
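
As a rough stand-in for that per-10 ms pitch/intensity/MFCC extraction, the sketch below uses librosa rather than OpenEAR (a substitution; the file name, sampling rate, and pitch range are assumptions):

    import librosa

    y, sr = librosa.load("speaker.wav", sr=16000)
    hop = 160                                  # 10 ms at 16 kHz
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr,
                     hop_length=hop)                       # pitch contour
    rms = librosa.feature.rms(y=y, frame_length=400,
                              hop_length=hop)              # intensity
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12,
                                n_fft=400, hop_length=hop) # 12 MFCCs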

Linked Source and Target Domain Subspace Feature Transfer Learning–Exemplified by Speech Emotion Recognition J Deng, Z Zhang, B Schuller – Pattern Recognition (ICPR), …, 2014 – ieeexplore.ieee.org … Thus, the total feature vector per chunk contains 16×2×12 = 384 attributes. To ensure reproducibility as well, the open source openEAR toolkit [22] was used with the pre-defined challenge configuration. … [22] F. Eyben, M. Wollmer, and B. Schuller, “openEAR — Introducing … Related articles
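
The 384 figure decomposes exactly as the snippet states: 16 low-level descriptors, doubled by their delta coefficients, each summarized by 12 functionals:

    # Interspeech 2009 Emotion Challenge feature-vector size
    n_lld, n_functionals = 16, 12
    assert n_lld * 2 * n_functionals == 384   # static + delta, per chunk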

Emotion Recognition from Speech and Facial Expression using Bidirectional SG Agrawal, S Dongaonkar – ijfeat.org … parameters (slope, offset, linear/quadratic approximation error), maximum and minimum positions, skewness, kurtosis, quartiles, interquartile ranges, and percentiles. All functionals were calculated using our openEAR toolkit [12]. … Related articles All 2 versions
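
The functionals listed in this snippet are per-contour statistics; a minimal NumPy/SciPy sketch of a few of them (the exact functional inventory in openEAR depends on the chosen configuration):

    import numpy as np
    from scipy.stats import skew, kurtosis

    def functionals(contour):
        # Summarize one low-level-descriptor contour; positions
        # are normalized to [0, 1] over the contour length.
        t = np.linspace(0.0, 1.0, len(contour))
        slope, offset = np.polyfit(t, contour, 1)    # linear approximation
        resid = contour - (slope * t + offset)
        q1, q2, q3 = np.percentile(contour, [25, 50, 75])
        return {
            "max": contour.max(), "min": contour.min(),
            "maxPos": contour.argmax() / (len(contour) - 1),
            "minPos": contour.argmin() / (len(contour) - 1),
            "skewness": skew(contour), "kurtosis": kurtosis(contour),
            "quartile1": q1, "quartile2": q2, "quartile3": q3,
            "iqr1-3": q3 - q1,
            "linRegSlope": slope, "linRegOffset": offset,
            "linRegErrQ": float(np.mean(resid ** 2)),  # quadratic error
        }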

Human perception of emotions in a model for human-robot interaction V Kirandziska, N Ackovska – proceedings.ictinnovations.org … 2. P. Ekman, Basic emotions, T. Dalgleish and M. Power, eds., Handbook of Cognition and Emotion, New York, Wiley, 1999. 3. F. Eyben, M. Wollmer, and B. Schuller, openEAR – Introducing the Munich Open-Source Emotion and Affect Recognition Toolkit, Proc. … Related articles

Covarep—a collaborative voice analysis repository for speech technologies G Degottex, J Kane, T Drugman… – … , Speech and Signal …, 2014 – ieeexplore.ieee.org … [5] F. Eyben, M. Wöllmer, and B. Schuller, “OpenEAR – Introducing the Munich open-source emotion and affect recognition toolkit,” in Intl Conf. on Affective Comp. and Intell. Interaction and Workshops, 2009, http://www.openaudio.eu. … Cited by 10 Related articles All 8 versions

On the modeling of natural vocal emotion expressions through binary key J Luque, X Anguera – … 2013 Proceedings of the 22nd European, 2014 – ieeexplore.ieee.org … IEEE, vol. 4, pp. 1085–1088, 2007. [7] F. Eyben, M. Wollmer, and B. Schuller, “openEAR – introducing the Munich open-source emotion and affect recognition toolkit,” in Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. … Related articles All 3 versions

Learning Salient Features for Speech Emotion Recognition Using Convolutional Neural Networks Q Mao, M Dong, Z Huang, Y Zhan – 2014 – ieeexplore.ieee.org … Related articles

Medium-term speaker states—A review on intoxication, sleepiness and the first challenge B Schuller, S Steidl, A Batliner, F Schiel… – Computer Speech & …, 2014 – Elsevier In the emerging field of computational paralinguistics, most research efforts are devoted to either short-term speaker states such as emotions, or long-term traits … Cited by 19 Related articles All 4 versions

Survey on audiovisual emotion recognition: databases, features, and data fusion strategies CH Wu, JC Lin, WL Wei – APSIPA Transactions on Signal …, 2014 – Cambridge Univ Press … SIP (2014), vol. 3, e12. Open Access article distributed under the Creative Commons Attribution licence. … Related articles

Recast: an interactive platform for personal media curation and distribution D Sawada – 2014 – dspace.mit.edu … Thesis, Program in Media Arts and Sciences, School of Architecture and Planning, MIT. … Related articles All 2 versions

Level of interest sensing in spoken dialog using decision-level fusion of acoustic and lexical evidence JH Jeon, R Xia, Y Liu – Computer Speech & Language, 2014 – Elsevier Automatic detection of a user’s interest in spoken dialog plays an important role in many applications, such as tutoring systems and customer service systems. … Related articles All 3 versions

Shape-based modeling of the fundamental frequency contour for emotion detection in speech JP Arias, C Busso, NB Yoma – Computer Speech & Language, 2014 – Elsevier This paper proposes the use of neutral reference models to detect local emotional prominence in the fundamental frequency. A novel approach based on functional … Cited by 9 Related articles All 6 versions

Audio onset detection: A wavelet packet based approach with recurrent neural networks E Marchi, G Ferroni, F Eyben… – … Joint Conference on, 2014 – ieeexplore.ieee.org … 92–95, IEEE. [2] F. Eyben, M. Wöllmer, and B. Schuller, “openEAR – Introducing the Munich Open-Source Emotion and Affect Recognition Toolkit,” in Proceedings 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009 … Related articles

Representation of facial expression categories in continuous arousal–valence space: Feature and correlation L Zhang, D Tjondronegoro, V Chandran – Image and Vision Computing, 2014 – Elsevier Representation of facial expressions using continuous dimensions has shown to be inherently more expressive and psychologically meaningful than using categorized … Related articles All 2 versions

[BOOK] Good Leaders Ask Great Questions: Your Foundation for Successful Leadership JC Maxwell – 2014 – books.google.com … Related articles All 2 versions

Beyond Text based sentiment analysis: Towards multi-modal systems S Poria, A Hussain, E Cambria – cs.stir.ac.uk … Springer Cognitive Computation, manuscript. … Related articles
