SAL (Sensitive Artificial Listener) 2017


Notes:

The SEMAINE system was a non-verbally competent Sensitive Artificial Listener.

  • Emotion modeling
  • Semaine corpus
  • Semaine database
  • Semaine dataset
  • Social embodiment
  • Virtual listener

See also:

100 Best Emotion Recognition Videos | 100 Best Sensitive Artificial Listener Videos


Continuous Facial Expression Recognition For Affective Interaction With Virtual Avatar
Z Shang, J Joshi, J Hoey – pdfs.semanticscholar.org
… The Semaine database [9] provides extensive annotated audio and visual recordings of a person interacting with an emotionally limited avatar, or sensitive artificial listener (SAL), to study natural social behavior in human interac …

Get Your Virtual Hands Off Me!–Developing Threatening IVAs Using Haptic Feedback
L Goedschalk, T Bosse, M Otte – Benelux Conference on Artificial …, 2017 – Springer
… For example, the Sensitive Artificial Listener paradigm enables studying the effect of agents with different personalities on human interlocutors, which provided evidence that IVAs with an angry attitude indeed trigger different (subjective and behavioural) responses than agents …

MAP: Multimodal Assessment Platform for Interactive Communication Competency
SM Khan – pnigel.com
… The model utilizes facial expressions to infer engagement. We applied supervised machine learning methods to facial features extracted from 21 participants in the publicly available SEMAINE database (McKeown et al. 2012) of dyadic human interactions …

Circle of Emotions in Life: Emotion Mapping in 2 Dimensions
TS Saini, M Bedekar, S Zahoor – … of the 9th International Conference on …, 2017 – dl.acm.org
… A four-factor solution including cohesive-flexibility, chaos, disengagement, and modified enmeshment appeared more suitable [11]. Figure 12 demonstrates that the Sensitive Artificial Listener (SAL) project is involved in ways to elicit different emotions from humans …

Ranking emotional attributes with deep neural networks
S Parthasarathy, R Lotfian… – Acoustics, Speech and …, 2017 – ieeexplore.ieee.org
… The study considered the SEMAINE database, where the labels range from -1 to 1. The optimal margin was t = 0.5 for arousal and t = 0.45 for valence (the annotations for the SEMAINE database do not include dominance) …
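
The margin-based setup quoted above is easy to illustrate. Below is a minimal Python sketch (not the authors' code; the toy labels and the preference_pairs helper are hypothetical) of deriving ranking pairs from continuous labels in [-1, 1], discarding pairs whose difference falls under the margin t:

    # Sketch: build preference pairs from continuous emotion labels,
    # keeping only pairs separated by more than the margin t.
    import itertools

    def preference_pairs(labels, t=0.5):
        """Return index pairs (i, j) where item i is rated higher than j
        by more than the margin t; ambiguous pairs are discarded."""
        pairs = []
        for i, j in itertools.combinations(range(len(labels)), 2):
            if labels[i] - labels[j] > t:
                pairs.append((i, j))
            elif labels[j] - labels[i] > t:
                pairs.append((j, i))
        return pairs

    arousal = [-0.8, -0.1, 0.3, 0.9]         # toy ratings in [-1, 1]
    print(preference_pairs(arousal, t=0.5))  # [(1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]

A ranking model is then trained on the surviving pairs only, which is what makes the choice of margin t a tunable parameter in the first place.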

Optimizing Filter Size in Convolutional Neural Networks for Facial Action Unit Recognition
S Han, Z Meng, J O’Reilly, J Cai, X Wang… – arXiv preprint arXiv …, 2017 – arxiv.org
… Experimental results on four benchmark AU-coded databases, i.e., Cohn-Kanade (CK) [16] database, FERA2015 SEMAINE database [27], FERA2015 BP4D database [27], and Denver Intensity of Spontaneous Facial Action (DISFA) database [20] have demonstrated that the …

Fatauva-net: An integrated deep learning framework for facial attribute recognition, action unit (au) detection, and valence-arousal estimation
WY Chang, SH Hsu, JH Chien – Proceedings of the IEEE …, 2017 – openaccess.thecvf.com
… 4.3. Result of AU detection As to the training phase for AU detection, we learn AU layers for AU detection from the dataset in FERA2015 [34], and we select 14 AUs (as shown in Table 3) from both BP4D database [43] and SEMAINE database [26] …

EmoLiTe—A database for emotion detection during literary text reading
R Wegener, C Kohlschein, S Jeschke… – … and Demos (ACIIW) …, 2017 – ieeexplore.ieee.org
… research. Previous work done on multi-modal emotion databases includes the SEMAINE database [5]. It is an audiovisual database which includes recordings between real users and a sensitive artificial listener (SAL). The …

Strength modelling for real-world automatic continuous affect recognition from audiovisual signals
J Han, Z Zhang, N Cummins, F Ringeval… – Image and Vision …, 2017 – Elsevier
… The SEMAINE database was recorded in conversations between humans and artificially intelligent agents … For our experiments, the 24 recordings of the Solid-Sensitive Artificial Listener (Solid-SAL) part of the database were used, in which the characters were role-played …

Facial emotion recognition
M Xiaoxi, L Weisi, H Dongyan… – Signal and Image …, 2017 – ieeexplore.ieee.org
… The Matthews Correlation Coefficient (MCC) [33] is used to represent inter-coder reliability. Only AUs where the MCC is larger than 0.6 are selected; as a result, AU2, AU12, AU17, AU25, AU28 and AU45 are selected for implementation from the SEMAINE database …
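
As a rough illustration of the AU-selection criterion in this excerpt, here is a minimal sketch (an assumed workflow with made-up annotator codings, not the paper's code) that keeps only action units whose inter-coder Matthews Correlation Coefficient exceeds 0.6, using scikit-learn:

    # Sketch: retain AUs whose inter-coder MCC is above 0.6.
    from sklearn.metrics import matthews_corrcoef

    # Hypothetical binary codings of the same clips by two annotators, per AU.
    coder1 = {"AU2": [1, 0, 1, 1, 0, 1], "AU4": [1, 1, 0, 0, 1, 0]}
    coder2 = {"AU2": [1, 0, 1, 0, 0, 1], "AU4": [0, 1, 1, 0, 0, 1]}

    selected = [au for au in coder1
                if matthews_corrcoef(coder1[au], coder2[au]) > 0.6]
    print(selected)  # only AUs with acceptable inter-coder reliability, e.g. ['AU2']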

Continuous estimation of emotions in speech by dynamic cooperative speaker models
A Mencattini, E Martinelli, F Ringeval… – IEEE transactions on …, 2017 – ieeexplore.ieee.org
… 1.1 Related Work Recently, databases of emotion collected during natural interactions with time-continuous ratings (e.g., arousal and valence [16]) have emerged, such as the Sensitive Artificial Listener (SAL) set in the HUMAINE database [17], the SEMAINE database [18] and …

Eliciting Positive Emotional Impact in Dialogue Response Selection
N Lubis, S Sakti, K Yoshino, S Nakamura – uni-ulm.de
… In this section, we describe in detail the SEMAINE database and highlight the qualities that make it suitable for our study. The SEMAINE database consists of dialogues between a user and a Sensitive Artificial Listener (SAL) in a Wizard-of-Oz fashion [10] …

A radial base neural network approach for emotion recognition in human speech
L Hussain, I Shafi, S Saeed, A Abbas, IA Awan… – IJCSNS, 2017 – paper.ijcsns.org
… In addition, the Sensitive Artificial Listener (SAL) database is used for emotion recognition; it contains natural colour speech and helps recognition from high to low valence states [6]. The multimedia contents from the user are guessed through bio-inspired multimedia …

Prediction-based learning for continuous emotion recognition in speech
J Han, Z Zhang, F Ringeval… – Acoustics, Speech and …, 2017 – ieeexplore.ieee.org
… For instance, the work in [9] has compared the performance of SVR and Bidirectional Long Short-Term RNNs (BLSTM-RNNs) for the continuous prediction of arousal and valence on the Sensitive Artificial Listener database, and the results indicate that the latter performed …

Year of Publication: 2017
S Malwatkar, R Sugandhi, AR Mahajan – pdfs.semanticscholar.org
… 10. McKeown, Gary, et al., “The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent,” IEEE Transactions on Affective Computing 3.1, 2012, pp 5-17. 11 …

Recognizing Emotionally Coloured Dialogue Speech Using Speaker-Adapted DNN-CNN Bottleneck Features
K Mukaihara, S Sakti, S Nakamura – International Conference on Speech …, 2017 – Springer
… specifically to address the task of achieving emotion-rich interaction with an automatic agent, called a sensitive artificial listener (SAL … 947–952 (2000). 8. McKeown, G., Valstar, M., Cowie, R., Pantic, M., Schroder, M.: The SEMAINE database: annotated multimodal …

Affectively Aligned Assistive Technology for Persons with Dementia
J Joshi, A König, Z Shang, JM Robillard, LE Francis… – pdfs.semanticscholar.org
… An open-source toolkit is used for face registration and landmark detection. Continuous labeled data from the SEMAINE database is used for training the 3D vector in EPA space. This component outputs the emotion of the user …

The NoXi database: multimodal recordings of mediated novice-expert interactions
A Cafaro, J Wagner, T Baur, S Dermouche… – Proceedings of the 19th …, 2017 – dl.acm.org
… A large audio-visual dataset created originally as part of an iterative approach to building virtual agents that can engage a person in a sustained and emotionally-colored conversation, the SEMAINE dataset was collected using the Sensitive Artificial Listener (SAL) paradigm …

Building naturalistic emotionally balanced speech corpus by retrieving emotional speech from existing podcast recordings
R Lotfian, C Busso – IEEE Transactions on Affective Computing, 2017 – ieeexplore.ieee.org
… of this technique include creating hypothetical situations (IEMOCAP database [10]), conversation over video conference while completing a collaborative task (RECOLA database [11]) or eliciting emotions with a sensitive artificial listener (SAL) (SEMAINE database [12], [13]) …

Comparative Study on Normalisation in Emotion Recognition from Speech
R Böck, O Egorow, I Siegert, A Wendemuth – International Conference on …, 2017 – Springer
… emotions. The quality of emotional content spans a much broader variety than in emoDB. The Belfast Sensitive Artificial Listener (SAL) (cf. [6]) corpus contains 25 audio-visual recordings from four speakers (two female). The …

Automatic generation of actionable feedback towards improving social competency in job interviews
SK Nambiar, R Das, S Rasipuram… – Proceedings of the 1st …, 2017 – dl.acm.org
… over a human peer [21]. The Sensitive Artificial Listener (SAL) is a real-time interactive multimodal dialogue system that focuses primarily on emotional and non-verbal interaction capabilities [19]. The SAL system uses visual …

Continuous Affect Recognition with Different Features and Modeling Approaches in Evaluation-Potency-Activity Space
Z Shang – 2017 – uwspace.uwaterloo.ca
… This database provides extensive annotated audio and visual recordings of a person interacting with an emotionally limited agent, or sensitive artificial listener (SAL), to study natural social behavior in human interaction. The Semaine database provides three SAL scenarios …

AFEW-VA database for valence and arousal estimation in-the-wild
J Kossaifi, G Tzimiropoulos, S Todorovic… – Image and Vision …, 2017 – Elsevier
… landmarks. We subsequently evaluate a number of common baseline and state-of-the-art methods on both a commonly used laboratory recording dataset (Semaine database) and the newly proposed recording set (AFEW-VA) …

How is emotion change reflected in manual and automatic annotations of different modalities?
Y Xiang – 2017 – essay.utwente.nl
… The SEMAINE database consists of recordings of persons talking to different virtual characters … MDQ – maxima dispersion quotient; SAL – sensitive artificial listener; QA – quantitative agreement; GCI – Glottal Closure Instants …

Aff-wild: Valence and arousal in-the-wild challenge
S Zafeiriou, D Kollias, MA Nicolaou… – IEEE CVPR …, 2017 – openaccess.thecvf.com
… scenario [5] and the SEMAINE [27] corpus which contains recordings of subjects interacting with a Sensitive Artificial Listener (SAL) under … The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent …

Capturing Feature and Label Relations Simultaneously for Multiple Facial Action Unit Recognition
S Wang, S Wu, G Peng, Q Ji – IEEE Transactions on Affective …, 2017 – ieeexplore.ieee.org
… Finally, we have added AU and expression recognition experiments with expression assisted on the CK+ database and the SEMAINE database to demonstrate the effectiveness of the extended method by capturing the relations among AUs and expressions …

Multi-task deep neural network with shared hidden layers: Breaking down the wall between emotion representations
Y Zhang, Y Liu, F Weninger… – Acoustics, Speech and …, 2017 – ieeexplore.ieee.org
… The Belfast Sensitive Artificial Listener (SAL) is a subset of the HUMAINE database [28, 29] containing audiovisual recordings from natural human-computer conversations. The SmartKom (Smart) corpus [30] features spontaneous speech produced …

Affect-lm: A neural language model for customizable affective text generation
S Ghosh, M Chollet, E Laksana, LP Morency… – arXiv preprint arXiv …, 2017 – arxiv.org
… matic Stress Disorder). SEMAINE dataset: SEMAINE (McKeown et al., 2012) is a large audiovisual corpus consisting of interactions between subjects and an operator simulating a SAL (Sensitive Artificial Listener). There are a …

The JESTKOD database: an affective multimodal database of dyadic interactions
E Bozkurt, H Khaki, S Keçeci, BB Türker… – Language Resources …, 2017 – Springer
… Cowie et al. 2007). The SEMAINE database consists of audio–visual data in the form of conversations between participants and a number of virtual characters with particular personalities (McKeown et al. 2012). The acted audio …

A facial-expression monitoring system for improved healthcare in smart cities
G Muhammad, M Alsulaiman, SU Amin… – IEEE …, 2017 – ieeexplore.ieee.org
… Emotional Speech (DES) database, the EMO-DB, the eNTERFACE (eNTER), the Airplane Behaviour Corpus (ABC), the Speech Under Simulated and Actual Stress (SUSAS) database, the Audiovisual Interest Corpus (AVIC), the Belfast Sensitive Artificial Listener (SAL), the …

The cost of dichotomizing continuous labels for binary classification problems: Deriving a Bayesian-optimal classifier
S Mariooryad, C Busso – IEEE Transactions on Affective …, 2017 – ieeexplore.ieee.org
… Fig. 1 gives the distributions of the average values for the emotional attributes corresponding to arousal (calm versus active) and valence (negative versus positive), for the utterances in the SEMAINE database [6]. These labels are derived across multiple evaluators …
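
To make the dichotomization this paper analyzes concrete, here is a minimal sketch (toy data; the threshold and arrays are assumptions, not the authors' method) of collapsing continuous attribute ratings into binary classes with a fixed cutoff:

    # Sketch: dichotomize continuous ratings (e.g., arousal) at a threshold.
    import numpy as np

    rng = np.random.default_rng(0)
    arousal = rng.uniform(-1, 1, size=10)    # toy per-utterance average ratings

    threshold = 0.0                          # naive split: calm vs. active
    labels = (arousal > threshold).astype(int)
    print(list(zip(np.round(arousal, 2), labels)))

The paper's point is precisely that such a fixed cutoff discards rating uncertainty near the boundary, which is why it derives a Bayesian-optimal classifier instead.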

A Two-Stage Hierarchical Bilingual Emotion Recognition System Using a Hidden Markov Model and Neural Networks
M Deriche – Arabian Journal for Science and Engineering, 2017 – Springer

Facial action units for presentation attack detection
S Pan, F Deravi – Emerging Security Technologies (EST), 2017 …, 2017 – ieeexplore.ieee.org
… The TAUD Action Unit detector [20] also uses a pre-trained model, which was trained and evaluated using a subset of the SEMAINE database. This dataset is much smaller than the one used to train the OpenFace AU detector [19] …

1 Machine Learning Methods for Social Signal Processing
O Rudovic, MA Nicolaou, V Pavlovic – Social Signal Processing, 2017 – books.google.com
… Ekman, Friesen, & Press, 1975) to one of the modern databases on affect, the SEMAINE database (McKeown et al … other examples of databases which incorporate continuous annotations include the Belfast Naturalistic Database, the Sensitive Artificial Listener (Douglas-Cowie …

A longitudinal database of Irish political speech with annotations of speaker ability
A Cullen, N Harte – Language Resources and Evaluation – Springer
… 2012). Both the SEMAINE database and the 3D Corpus of Spontaneous Mental States look for emotion in dyadic interactions (McKeown et al. 2012; Mahmoud et al … 2011). For example, power is the least reliably labelled dimension in the SEMAINE database (McKeown et al …

Harnessing AI for Augmenting Creativity: Application to Movie Trailer Creation
JR Smith, D Joshi, B Huet, W Hsu, J Cota – Proceedings of the 2017 …, 2017 – dl.acm.org
… There are almost as many feature sets as there are datasets used for training (Berlin Speech Emotion Database [4], eNTERFACE [24], Airplane Behaviour Corpus [28], Audio Visual Interest Corpus [30], Belfast Sensitive Artificial Listener [8] and Vera-Am-Mittag [13]) …

EmoAssist: emotion enabled assistive tool to enhance dyadic conversation for the blind
A Rahman, ASMI Anam, M Yeasin – Multimedia Tools and Applications, 2017 – Springer
… The Solid-SAL dataset is the major portion of the SEMAINE database [21] that consists of audiovisual interaction between a human and an operator undertaking the role of an agent (Sensitive Artificial Listener) with four personalities: Poppy (happy), Obadiah (gloomy), Spike …

Emotion Recognition from Speech
A Wendemuth, B Vlasenko, I Siegert, R Böck… – Companion …, 2017 – Springer
… They used four emotional terms to discern emotions. The annotation process was conducted by nine labelers assessing complete utterances. The Belfast Sensitive Artificial Listener corpus (SAL) is built from emotionally colored multimodal conversations …

A Content Analysis Of The Research Approaches In Speech Emotion Recognition
T Özseven, M Düğenci, A Durmuşoğlu – ijesrt.com
… 7 Linguistic Data Consortium, University of Pennsylvania, USA. 8 The SEMAINE database was collected for the SEMAINE-project by Queen’s University Belfast. 9 Department of Electronic Systems, Aalborg University, Denmark. 10 TCTS Lab. of Faculte Polytechnique de Mons …

Shared acoustic codes underlie emotional communication in music and speech—Evidence from deep transfer learning
E Coutinho, B Schuller – PloS one, 2017 – journals.plos.org
… In the speech domain, [22] applied LSTM-RNNs to the estimation of Arousal and Valence from a subset of natural speech recordings from the SEMAINE database [23] (see also [24]) … Speech. DAE pre-training: Semaine database …

Statistical Selection of CNN-Based Audiovisual Features for Instantaneous Estimation of Human Emotional States
R Basnet, MT Islam, T Howlader, SM Rahman… – arXiv preprint arXiv …, 2017 – arxiv.org
… seconds as per the frame-rate. Thus, the proposed instantaneous emotion prediction technique can be effective in developing real-time sensitive artificial listener (SAL) agents. 6. Conclusion Automatic prediction of emotional states …

Do you speak to a human or a virtual agent? automatic analysis of user’s social cues during mediated communication
M Ochs, N Libermann, A Boidin… – Proceedings of the 19th …, 2017 – dl.acm.org
… and generic behavioral feedbacks [21]. A voice synthesizer … Note that corpora including human-human and human-agent interactions already exist (such as the SEMAINE database [18]). We have created our own database …

AMIGOS: A dataset for Mood, personality and affect research on Individuals and GrOupS
JA Miranda-Correa, MK Abadi, N Sebe… – arXiv preprint arXiv …, 2017 – arxiv.org
… is the Sustained Emotionally Colored Machine-human Interaction using Nonverbal Expression (SEMAINE) database [8]. It consists of high-quality, multimodal recordings of 150 participants in emotionally colored conversations in a sensitive artificial listener (SAL) configuration …

Computational Study of Primitive Emotional Contagion in Dyadic Interactions
I Hupont, C Clavel… – IEEE Transactions on …, 2017 – ieeexplore.ieee.org

Learning representations of emotional speech with deep convolutional generative adversarial networks
J Chang, S Scherer – Acoustics, Speech and Signal Processing …, 2017 – ieeexplore.ieee.org
… [7] Gary McKeown, Michel Valstar, Roddy Cowie, Maja Pantic, and Marc Schroder, “The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp …

Dynamic behavior analysis via structured rank minimization
C Georgakis, Y Panagakis, M Pantic – International Journal of Computer …, 2017 – Springer

A review of affective computing: From unimodal analysis to multimodal fusion
S Poria, E Cambria, R Bajpai, A Hussain – Information Fusion, 2017 – Elsevier

A multi-task learning framework for emotion recognition using 2D continuous space
R Xia, Y Liu – IEEE Transactions on Affective Computing, 2017 – ieeexplore.ieee.org
… 4.2 SEMAINE This corpus is developed under a scenario called ‘Sensitive Artificial Listener’ [32]. This scenario is designed to generate a conversation between a user and an operator. The operator is either another human or a virtual character …

Speech emotion recognition using derived features from speech segment and kernel principal component analysis
M Charoendee, A Suchato… – Computer Science and …, 2017 – ieeexplore.ieee.org
… 65, pp. 1964-1987, 2014. [4] R. Calix, M. Khazaeli, L. Javadpour, and G. Knapp, “Dimensionality Reduction and Classification Analysis on the Audio Section of the SEMAINE Database,” in Affective Computing and Intelligent Interaction. vol …

Establishing Ground Truth on Psychophysiological Models for Training Machine Learning Algorithms: Options for Ground Truth Proxies
K Brawner, MW Boyce – International Conference on Augmented Cognition, 2017 – Springer
… 2155–2166 (2012). 23. McKeown, G., Valstar, M., Cowie, R., Pantic, M., Schroder, M.: The semaine database: annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Trans. Affect. Comput …

Multimodal Emotion Recognition for Human-Computer Interaction: A Survey
M Mukeshimana, X Ban, N Karani, R Liu – System – pdfs.semanticscholar.org
… SEMAINE corpus (audio-visual, induced): a person converses with a limited agent [143]; SAL Data Set (audio-visual, induced): people react to the Sensitive Artificial Listener [35]; SmartKom dataset (audio, spontaneous): spontaneous speech and natural emotions for German and …

Human Affect Recognition System based on Survey of Recent Approaches
S Malwatkar, R Sugandhi… – International Journal of …, 2017 – search.proquest.com
… Engineering 3.1 2013, pp. 180-185. [9] Retrieved Oct 19, 2016, from http://kahlan.eps.surrey.ac.uk/savee/. [10] McKeown, Gary, et al., “The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent,” …

BAUM-1: A Spontaneous Audio-Visual Face Database of Affective and Mental States
S Zhalehpour, O Onder, Z Akhtar… – IEEE Transactions on …, 2017 – ieeexplore.ieee.org
… 17], [18], [19]. The SEMAINE [17] database was collected under constrained lab settings and contains naturalistic expressions from 150 subjects, who are in a conversation with a “sensitive artificial listener”. Another induced …

MSP-IMPROV: An acted corpus of dyadic interactions to study emotion perception
C Busso, S Parthasarathy, A Burmania… – IEEE Transactions …, 2017 – ieeexplore.ieee.org
… include recordings of kids interacting with robots [28], using wizard of oz (WOZ) methods during human-machine interaction [29], [30], [31], [32], requesting the subjects to recall personal emotional experiences [33], inducing emotions with a sensitive artificial listener (SAL) [34 …

Automatic prediction of impressions in time and across varying context: Personality, attractiveness and likeability
O Celiktutan, H Gunes – IEEE Transactions on Affective …, 2017 – ieeexplore.ieee.org
… naturalistic scenario. We took into account 10 different subjects. Each subject interacts with three Sensitive Artificial Listener (SAL) agents, namely, Poppy, Obadiah and Spike, resulting in 30 video recordings. To reduce the …

Affect And Believability In Game Characters–A Review Of The Use Of Affective Computing In Games
S Hamdy, D King – researchgate.net

From Hard to Soft: Towards more Human-like Emotion Recognition by Modelling the Perception Uncertainty
J Han, Z Zhang, M Schmitt, M Pantic… – Proceedings of the 2017 …, 2017 – dl.acm.org

Fusion of Valence and Arousal Annotations through Dynamic Subjective Ordinal Modelling
A Ruiz, O Martinez, X Binefa… – Automatic Face & …, 2017 – ieeexplore.ieee.org

Development of simulated emotion speech database for excitation source analysis
D Pravena, D Govind – International Journal of Speech Technology, 2017 – Springer
The work presented in this paper is focused on the development of a simulated emotion database particularly for the excitation source analysis. The presence of simultaneous electroglottogram (EGG) rec.

Interactive narration with a child: impact of prosody and facial expressions
O Şerban, M Barange, S Zojaji, A Pauchet… – Proceedings of the 19th …, 2017 – dl.acm.org

CHEAVD: a Chinese natural emotional audio–visual database
Y Li, J Tao, L Chao, W Bao, Y Liu – Journal of Ambient Intelligence and …, 2017 – Springer
This paper presents a recently collected natural, multimodal, rich-annotated emotion database, CASIA Chinese Natural Emotional Audio–Visual Database (CHEAVD), which aims to provide a basic resource fo.

Developing a benchmark for emotional analysis of music
A Aljanaki, YH Yang, M Soleymani – PloS one, 2017 – journals.plos.org
Music emotion recognition (MER) field rapidly expanded in the last decade. Many new methods and new audio features are developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the data …

BNU-LSVED 2.0: Spontaneous multimodal student affect database with multi-dimensional labels
Q Wei, B Sun, J He, L Yu – Signal Processing: Image Communication, 2017 – Elsevier

Audio-Facial Laughter Detection in Naturalistic Dyadic Conversations
BB Turker, Y Yemez, TM Sezgin… – IEEE Transactions on …, 2017 – ieeexplore.ieee.org
… We address the problem of continuous laughter …

Continuous affect prediction using eye gaze
J O’Dwyer, R Flynn, N Murray – Signals and Systems …, 2017 – ieeexplore.ieee.org
… 3–10. [4] G. McKeown, M. Valstar, R. Cowie, M. Pantic, and M. Schroder, “The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp …

Analysis of facial expressions in parkinson’s disease through video-based automatic methods
A Bandini, S Orlandi, HJ Escalante… – Journal of neuroscience …, 2017 – Elsevier

Pose-independent facial action unit intensity regression based on multi-task deep transfer learning
Y Zhou, J Pi, BE Shi – … Face & Gesture Recognition (FG 2017) …, 2017 – ieeexplore.ieee.org

Affectnet: A database for facial expression, valence, and arousal computing in the wild
A Mollahosseini, B Hasani, MH Mahoor – arXiv preprint arXiv:1708.03985, 2017 – arxiv.org

Continuous real-time annotation fusion correction via rank-based spatial warping
BM Booth, K Mundnich, SS Narayanan – brandonmbooth.net
… McKeown, G., Valstar, M., Cowie, R., Pantic, M., Schroder, M., 2012. The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Transactions on Affective Computing 3, 5–17 …

Personality Perception of robot avatar Teleoperators in solo and Dyadic Tasks
PA Bremner, O Celiktutan, H Gunes – Frontiers in Robotics and AI, 2017 – frontiersin.org
Humanoid robot avatars are a potential new tele-communication tool whereby a user is remotely represented by a robot that replicates their arm and head movements. They have been shown to have a number of benefits over more traditional media such as phones or video calls …

Modelling Engagement in Multi-Party Conversations
C Oertel – KTH, Stockholm, Sweden, 2017 – pdfs.semanticscholar.org
… Data-Driven Approaches to Understanding Human-Human Communication Patterns for Use in Human-Robot Interactions. Doctoral thesis …

Annotating and modeling empathy in spoken conversations
F Alam, M Danieli, G Riccardi – Computer Speech & Language, 2017 – Elsevier
Empathy, as defined in behavioral sciences, expresses the ability of human beings to recognize, understand and react to emotions, attitudes and beliefs of other.

I Probe, Therefore I Am: Designing a Virtual Journalist with Human Emotions
KK Bowden, T Nilsson, CP Spencer, K Cengiz… – arXiv preprint arXiv …, 2017 – arxiv.org
… [28] G. McKeown, M. Valstar, R. Cowie, M. Pantic, and M. Schroder, “The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 5–17, 2012.

Cross-corpus acoustic emotion recognition with multi-task learning: Seeking common ground while preserving differences
B Zhang, EM Provost, G Essl – IEEE Transactions on Affective …, 2017 – ieeexplore.ieee.org

Facial Actions as Social Signals
M Valstar, S Zafeiriou, M Pantic – Social Signal Processing, 2017 – books.google.com
… According to a recent survey on social signal processing (Vinciarelli, Pantic, & Bourlard, 2009), next-generation computing needs …

Multimodal Human-Human-Robot Interactions (MHHRI) Dataset for Studying Personality and Engagement
O Celiktutan, E Skordos… – IEEE Transactions on …, 2017 – ieeexplore.ieee.org

Sayette group formation task (GFT) spontaneous facial expression database
JM Girard, WS Chu, LA Jeni… – Automatic Face & Gesture …, 2017 – ieeexplore.ieee.org

Field Studies with Multimedia Big Data: Opportunities and Challenges (Extended Ver
MM Krell, J Bernd, Y Li, D Ma, J Choi… – arXiv preprint arXiv …, 2017 – arxiv.org

The Ordinal Nature of Emotions
GN Yannakakis, R Cowie, C Busso – Int. Conference on Affective …, 2017 – yannakakis.net

The Indian spontaneous expression database for emotion recognition
SL Happy, P Patnaik, A Routray… – IEEE Transactions on …, 2017 – ieeexplore.ieee.org

A learned emotion space for emotion recognition and emotive speech synthesis
Z Hodari – pdfs.semanticscholar.org
… Master of Science by Research, Centre for Doctoral Training in Data Science, School of Informatics, University of Edinburgh, 2017 …

Automatic analysis of facial actions: A survey
B Martinez, MF Valstar, B Jiang… – IEEE Transactions on …, 2017 – ieeexplore.ieee.org

Automatic Sentiment Detection in Naturalistic Audio
L Kaushik, A Sangwan… – IEEE/ACM Transactions …, 2017 – ieeexplore.ieee.org

Action units and their cross-correlations for prediction of cognitive load during driving
A Yüce, H Gao, GL Cuendet… – IEEE Transactions on …, 2017 – ieeexplore.ieee.org

Speech Emotion Recognition Using Deep Convolutional Neural Network and Discriminant Temporal Pyramid Matching
S Zhang, S Zhang, T Huang… – IEEE Transactions on …, 2017 – ieeexplore.ieee.org

A User Perception–Based Approach to Create Smiling Embodied Conversational Agents
M Ochs, C Pelachaud, G Mckeown – ACM Transactions on Interactive …, 2017 – dl.acm.org
