SAL (Sensitive Artificial Listener) 2014


See also:

100 Best Emotion Recognition Videos | Affective Computing & Dialog Systems 2014 | Best Sensitive Artificial Listener Videos | Emotional Agents 2013 | Emotional Agents 2014 | SAL (Sensitive Artificial Listener) 2015


Continuous prediction of perceived traits and social dimension in space and time O Celiktutan, H Gunes – Proc. of IEEE Int. Conf. on Image …, 2014 – eecs.qmul.ac.uk … This database contains recordings of human subjects interacting with virtual agents in a naturalistic scenario. We took into account 10 different subjects, each communicating with 3 semi-automatic Sensitive Artificial Listener (SAL) agents, namely, Poppy, Obadiah and Spike. … Cited by 2 Related articles

Emotion and its triggers in human spoken dialogue: Recognition and analysis N Lubis, S Sakti, G Neubig, T Toda, A Purwarianti… – Proc IWSDS, 2014 – isw3.naist.jp … The SEMAINE Database consists of dialogue between a user and an operator using the Sensitive Artificial Listener (SAL) scenario, where the operator shows colorful emotional expressions and manages basic aspects of conversation, such as turn taking and back-channeling … Cited by 1 Related articles All 3 versions

Comparative Analysis of verbal alignment in human-human and human-agent interactions S Campano, J Durand, C Clavel – Language Resources and …, 2014 – lrec-conf.org … 2. Corpora Overview 2.1. SEMAINE and CID The SEMAINE database is a corpus of emotionally coloured conversations, taking place in a machine-human scenario called the Sensitive Artificial Listener (McKeown et al., 2010). … Cited by 2 Related articles

Fusion of visible and thermal images for facial expression recognition S Wang, S He, Y Wu, M He, Q Ji – Frontiers of Computer Science, 2014 – Springer … In: Proceedings of the 9th international conference on Multimodal interfaces. 2007, 38–45 CrossRef; Gunes H, Pantic M. Dimensional emotion prediction from spontaneous head gestures for interaction with sensitive artificial listeners. … Cited by 2 Related articles All 3 versions

The Role of Physical Embodiment of Humanoid Robot Interaction: Focusing on Backchannel Head Nods in Danish First Meeting Encounters N Segato, A Krogsager, DG Jensen… – HCI International 2014- …, 2014 – Springer … 7502, pp. 404–411. Springer, Heidelberg (2012) 4. Schröder, M., Bevacqua, E., Cowie, R., Eyben, F., Gunes, H., Heylen, D., ter Maat, M., McKeown, G., Pammi, S., Pantic, M., Pelachaud, C., Schuller, B., de Sevin, E., Valstar, M.: Building Autonomous Sensitive Artificial Listeners. … Cited by 1 Related articles

Exploring the Difference of the Impression on Human and Agent Listeners in Active Listening Dialog HH Huang, N Konishi, S Shibusawa… – Intelligent Virtual Agents, 2014 – Springer … mood. For example, when the user looks in bad mood, showing the agent’s concern on the user by saying “Are you OK?” like human do. The SEMAINE project [3,5] was launched to build a Sensitive Artificial Listener (SAL). SAL … Related articles All 2 versions

Building A Naturalistic Emotional Speech Corpus by Retrieving Expressive Behaviors From Existing Speech Corpora S Mariooryad, R Lotfian… – … Annual Conference of …, 2014 – mazsola.iit.uni-miskolc.hu … Variations of this technique include using scripts or hypothetical situations (IEMOCAP database [5]), performing a collaborative task (Recola database [6]) or eliciting emotions with the sensitive artificial listener (SAL) technique, in which a conversation partner plays the role of an … Related articles All 5 versions

Investigating Context Awareness of Affective Computing Systems: A Critical Approach A Vlachostergiou, G Caridakis, S Kollias – Procedia Computer Science, 2014 – Elsevier … Interaction context representation and annotation. References. [1]; E. Douglas-Cowie, R. Cowie, C. Cox, N. Amir, D. Heylen; The sensitive artificial listener: an induction technique for generating emotionally coloured conversation. … Related articles

A New Multi-modal Dataset for Human Affect Analysis H Wei, DS Monaghan, NE O’Connor… – Human Behavior …, 2014 – Springer … human affect recognition systems [3][16][17]. It features continuous annotated audiovisual recordings of emotionally coloured conversations, elicited through a Sensitive Artificial Listener (SAL). There are four SALs where each … Related articles All 4 versions

Intra-and Interpersonal Functions of Head Motion in Emotion Communication Z Hammal, JF Cohn – Proceedings of the 2014 Workshop on …, 2014 – dl.acm.org … IEEE Trans. on PAMI, 31(1), 39–58. 2. H. Gunes and M. Pantic. 2010. Dimensional emotion prediction from spontaneous head gestures for interaction with sensitive artificial listeners. Intelligent virtual agents, JA e. al., Ed., ed Heidelberg: Springer-Verlag Berlin, 6356, 371–377. … Related articles

BAUM-2: a multilingual audio-visual affective face database CE Erdem, C Turan, Z Aydin – Multimedia Tools and Applications, 2014 – Springer … The SEMAINE [35] database contains naturalistic expressions of emotions from 150 subjects who were interacting with a sensitive artificial listener. The video clips have been recorded in a laboratory, which give a lot of control over illumination and other recording conditions. … Related articles

Challenges for Social Embodiment E André – Proceedings of the 2014 Workshop on Roadmapping …, 2014 – dl.acm.org … TiiS, 2(1):4, 2012. 17. M. Schröder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M. ter Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller, E. de Sevin, MF Valstar, and M. Wöllmer. Building autonomous sensitive artificial listeners. … Related articles

Using nonverbal video features of players as an indicator of a social bond between deceiving players in a game of Werewolves. D Emmen, S Verhage – Delft University of Technology, 2014 – coendekker.nl … proposed by Heylen [10]. The listening behavior was implemented in a semiautonomous embodied chatrobot that was part of the Sensitive Artificial Listener. The interesting argument that concerns the Werewolf research is that it addresses … Related articles

Virtual Reflexes CM Jonker, J Broekens, A Plaat – Intelligent Virtual Agents, 2014 – Springer … The real-time aspect of emotion modeling has been addressed in, e.g., the work on autonomous real-time sensitive artificial listeners [10, 35, 40], the work on backchannel communication [5, 17, 37], to create “rapport” between virtual agents and humans [15, 18, 13], and in … Related articles All 6 versions

How Do You Like Your Virtual Agent?: Human-Agent Interaction Experience through Nonverbal Features and Personality Traits A Cerekovic, O Aran, D Gatica-Perez – Human Behavior Understanding, 2014 – Springer … We describe interaction experience through three measures (quality, rapport and likeness) [7]. The virtual agents we use in our study are Sensitive Artificial Listeners (SALs) [17], which are designed with the purpose of inducing specific emotional conversation. … Related articles All 8 versions

Steps towards more natural human-machine interaction via audio-visual word prominence detection M Heckmann – submitted to 2nd Workshop on Multimodal Analyses …, 2014 – arcor.de … INTERSPEECH. Singapore (2014) 35. Schroder, M., Bevacqua, E., Cowie, R., Eyben, F., Gunes, H., Heylen, D., Ter Maat, M., McKeown, G., Pammi, S., Pantic, M., Pelachaud, C., Schuller, B., de Sevin, E., Valstar, M., Wöllmer, M.: Building autonomous sensitive artificial listeners. … Cited by 2 Related articles

An insight into multimodal databases for social signal processing: acquisition, efforts, and directions A Čereković – Artificial Intelligence Review, 2014 – Springer … In: Proceedings of the international conference on affective computing and intelligent interaction. pp 488–500; Douglas-Cowie E, Cowie R, Cox C, Amir N, Heylen D (2008) The sensitive artificial listener: an induction technique for generating emotionally coloured conversation. … Cited by 1 Related articles All 5 versions

Robust Head Gestures Recognition for Assistive Technology JR Terven, J Salas, B Raducanu – Pattern Recognition, 2014 – Springer … Int. J. Comput. Vision 4 (2001) 11. Gunes, H., Pantic, M.: Dimensional emotion prediction from spontaneous head gestures for interaction with sensitive artificial listeners. In: Allbeck, JM, Badler, NI, Bickmore, TW, Pelachaud, C., Safonova, A. (eds.) IVA 2010. LNCS, vol. 6356, pp. … Related articles All 3 versions

Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems K Ruhland, S Andrist, J Badler… – … State-of-the-Art …, 2014 – hal.univ-grenoble-alpes.fr … Kerstin Ruhland, Sean Andrist, Jeremy Badler, Christopher Peters, Norman Badler, Michael Gleicher, Bilge Mutlu, Rachel Mcdonnell … Cited by 2 Related articles All 16 versions

Ten Challenges in Highly-Interactive Dialog Systems NG Ward, D DeVault – 2014 – cs.utep.edu … Building autonomous sensitive artificial listeners. IEEE Trans. Affective Computing 3:165–183. Schuller, B.; Steidl, S.; Batliner, A.; Burkhardt, F.; Devillers, L.; Müller, C.; and Narayanan, S. 2013. Paralinguistics in speech and language: state-of-the-art and the challenge. … Related articles All 2 versions

Interpreting social cues to generate credible affective reactions of virtual job interviewers H Jones, N Sabouret, I Damian, T Baur, E André… – arXiv preprint arXiv: …, 2014 – arxiv.org … Building autonomous sensitive artificial listeners. T. Affective Computing, 3(2):165–183, 2012. [33] Monika Sieverding. ‘Be Cool!’: Emotional costs of hiding feelings in a job interview. International Journal of Selection and Assessment, 17(4):391–401, 2009. [34] Mark Snyder. … Cited by 1 Related articles All 5 versions

A spoken dialogue system with situation and emotion detection based on anthropomorphic learning for warming healthcare BH Su, PW Fu, PC Lin, PY Shih, YC Lin… – … (ICOT), 2014 IEEE …, 2014 – ieeexplore.ieee.org … Proc. ISCA Workshop on Speech Synthesis, Bonn, Germany, pp. 294-299, August 22-24, 2007. [4] M. Schroder, “Building autonomous sensitive artificial listeners,” IEEE Transactions on Affective Computing, Vol. 3, no. 2, pp. … Related articles

An Emotion Theory Approach to Artificial Emotion Systems for Robots and Intelligent Systems: Survey and Classification ST Scheuring, A Agah – Journal of Intelligent Systems, 2014 – degruyter.com … Related articles All 2 versions

Context-Sensitive Affect Recognition for a Robotic Game Companion G Castellano, I Leite, A Pereira, C Martinho… – ACM Transactions on …, 2014 – dl.acm.org … analysis of gaze patterns. Gunes and Pantic [2010b] built a system for dimensional emotion recognition from spontaneous head gestures displayed during the interaction with a sensitive artificial listener. A few computational … Related articles

Applying a Text-Based Affective Dialogue System in Psychological Research: Case Studies on the Effects of System Behaviour, Interaction Context and Social … M Skowron, S Rank, A Świderska, D Küster… – Cognitive …, 2014 – Springer … Cited by 1 Related articles All 5 versions

Acoustic emotion recognition based on fusion of multiple feature-dependent deep Boltzmann machines K Poon-Feng, DY Huang, M Dong… – … (ISCSLP), 2014 9th …, 2014 – ieeexplore.ieee.org … the SEMAINE corpus [21]. The SEMAINE corpus contains recordings of conversations between humans and artificially intelligent agents, which in this case is called the Sensitive Artificial Listener (SAL). The recordings involve … Related articles

Gaussian Process Dynamical Models for Emotion Recognition HF García, MA Álvarez, Á Orozco – Advances in Visual Computing, 2014 – Springer … Pattern Recogn. 42, 1340–1350 (2009) 7. Gunes, H., Pantic, M.: Dimensional emotion prediction from spontaneous head gestures for interaction with sensitive artificial listeners. In: Allbeck, J., Badler, N., Bickmore, T., Pelachaud, C., Safonova, A. (eds.) IVA 2010. LNCS, vol. … Related articles

Backchannel Head Nods in Danish First Meeting Encounters with a Humanoid Robot: The Role of Physical Embodiment A Krogsager, N Segato, M Rehm – Human-Computer Interaction. …, 2014 – Springer … Personality and Individual Differences 13(4), 443–449 (1992) 5. Gunes, H., Heylen, D., ter Maat, M., McKeown, G., Pammi, S., Pantic, M., Pelachaud, C., Schuller, B., de Sevin, E., Valstar, M., Wöllmer, M.: Building Autonomous Sensitive Artificial Listeners. … Related articles All 2 versions

Emotion Modeling and Machine Learning in Affective Computing K Kim – mosys.cs.sunykorea.ac.kr … Nicolaou [48] used a kind of Recurrent Neural Network to predict Valence-Arousal (VA, which is the same as PA) changes over time. He used Sensitive Artificial Listener Database [13], which is audiovisual data (audio, face, shoulder). … Related articles

Speaker state classification based on fusion of asymmetric simple partial least squares (SIMPLS) and support vector machines DY Huang, Z Zhang, SS Ge – Computer Speech & Language, 2014 – Elsevier This paper presents our studies of the effects of acoustic features, speaker normalization methods, and statistical modeling techniques on speaker state classif. Cited by 5 Related articles All 5 versions

Don’t Classify Ratings of Affect; Rank them! H Martinez, G Yannakakis, J Hallam – 2014 – ieeexplore.ieee.org … We utilise the version of ? adjusted to account for ties. 4 DATASETS In this paper we evaluate the two alternative modelling methods on one synthetic and two dissimilar real datasets: the Maze-Ball (MB) dataset [30] and the Sensitive Artificial Listener (SAL) dataset [42]. … Related articles All 4 versions

Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data F Ringeval, F Eyben, E Kroupi, A Yuce… – Pattern Recognition …, 2014 – Elsevier … prediction for the utterance. Recently, databases with time-continuous ratings have emerged such as the Sensitive Artificial Listener (SAL) set in the HUMAINE database [9], and the SEMAINE database [10]. Such databases have … Related articles All 3 versions

Audio-visual emotion recognition: A dynamic, multimodal approach N Jeremie, R Vincent, B Kevin… – IHM’14, 26e …, 2014 – hal.archives-ouvertes.fr … Motion and Emotion in Student Learning. Principal Leadership, 2003. 8. H. Gunes and M. Pantic. Dimensional emotion prediction from spontaneous head gestures for interaction with sensitive artificial listeners. In Proc. of Intelligent Virtual Agents (IVA’10), pages 371–377, 2010. … Related articles All 19 versions

Designing an emotion detection system for a socially intelligent human-robot interaction C Chastagnol, C Clavel, M Courgeon… – Natural Interaction with …, 2014 – Springer … Chapter 18 Designing an Emotion Detection System for a Socially Intelligent Human-Robot Interaction Clément Chastagnol, Céline Clavel, Matthieu Courgeon, and Laurence Devillers Abstract The long-term goal of … Cited by 6 Related articles All 6 versions

Inferring depression and affect from application dependent meta knowledge M Kächele, M Schels, F Schwenker – Proceedings of the 4th International …, 2014 – dl.acm.org … A quite contrary approach is to allow an interaction with a computer that is as unconstrained as possible. One example for such a data set is the sensitive artificial listener data set, that has been used for the first two editions of the AVEC competition [51]. … Cited by 4 Related articles All 2 versions

Evaluation Of Syllable Rate Estimation In Expressive Speech And Its Contribution To Emotion Recognition M Abdelwahab, C Busso – ecs.utdallas.edu … different rooms. The emotions were elicited with the sensitive artificial listener (SAL) framework, where the operator assumes four personalities aiming to elicit positive and negative emotional reactions from the user. The data … Related articles

Expressing social attitudes in virtual agents for social training games N Sabouret, H Jones, M Ochs, M Chollet… – arXiv preprint arXiv: …, 2014 – arxiv.org … Nicolas Sabouret LIMSI-CNRS, Univ. Paris-Sud nicolas.sabouret@limsi.fr Hazaël Jones Univ. Pierre & Marie Curie hazael.jones@lip6.fr Magalie … Cited by 1 Related articles All 6 versions

Machine Learning Methods for Social Signal Processing O Rudovic, MA Nicolaou, V Pavlovic – ibug.doc.ic.ac.uk … Besides SEMAINE, other examples of databases which incorporate continuous annotations include the Belfast Naturalistic Database, the Sensitive Artificial Listener (Douglas-Cowie et al. 2003), (Cowie et al. 2005) as well as the CreativeIT database (Metallinou et al. … Related articles All 2 versions

Interpersonal Coordination of Head Motion in Distressed Couples Z Hammal, JF Cohn, DT George – 2014 – ieeexplore.ieee.org … Cited by 2 Related articles All 5 versions

Affective state level recognition in naturalistic facial and vocal expressions H Meng, N Bianchi-Berthouze – … , IEEE Transactions on, 2014 – ieeexplore.ieee.org … fps), and four microphones. The AVEC 2011 dataset was created from the first 140 operator–user interactions, which constitutes the sensitive artificial listener (Solid-SAL) partition of the SEMAINE corpus. The Solid-SAL partition … Cited by 13 Related articles All 10 versions

Recognizing signals of social attitude in interacting with Ambient Conversational Systems B De Carolis, N Novielli – Journal on Multimodal User Interfaces, 2014 – Springer Page 1. J Multimodal User Interfaces DOI 10.1007/s12193-013-0143-y ORIGINAL PAPER Recognizing signals of social attitude in interacting with Ambient Conversational Systems Berardina De Carolis • Nicole Novielli Received … Cited by 1 Related articles All 2 versions

A framework for the assessment of synthetic personalities according to user perception Z Callejas, D Griol, R López-Cózar – International Journal of Human- …, 2014 – Elsevier … consistency. This work is part of the SEMAINE project, which aimed to build Sensitive Artificial Listeners (SAL), for which personality was one of the aspects considered to build credible agents (Bevacqua et al., 2010). While … Related articles All 5 versions

Correcting time-continuous emotional labels by modeling the reaction lag of evaluators S Mariooryad, C Busso – 2014 – ieeexplore.ieee.org … 15]. This audiovisual database contains recordings from users interacting with an operator. Each operator acts as a sensitive artificial listener (SAL) agent [39]. … Cited by 2 Related articles All 2 versions

Presentation Skills Estimation Based on Video and Kinect Data Analysis V Echeverría, A Avendaño, K Chiluiza… – Proceedings of the …, 2014 – dl.acm.org … Vanessa Echeverría, Allan Avendaño, Katherine Chiluiza, Aníbal Vásquez and Xavier Ochoa Escuela Superior Politécnica del Litoral Guayaquil … Related articles

Cross-Lingual Automatic Speech Emotion Recognition BC Chiou – 2014 – etd.lib.nsysu.edu.tw … pus (ABC), Audiovisual Interest Corpus (AVIC), Danish Emotional Speech (DES), Berlin Database of Speech Emotion (EMO-DB), eNTERFACE, Sensitive Artificial Listener (SAL), SmartKom, Speech Under Simulated and Actual Stress (SUSAS), Vera-Am-Mittag … Related articles

Acquisition of Intercultural Data A Osherenko – Social Interaction, Globalization and Computer-Aided …, 2014 – Springer … This section describes a heuristic to acquiring personality-related data from an emotional corpus. Emotion-related data is represented in this book as the audio-visual Sensitive Artificial Listener (SAL) corpus (Douglas-Cowie et al. 2007). … Related articles

Survey on audiovisual emotion recognition: databases, features, and data fusion strategies CH Wu, JC Lin, WL Wei – APSIPA Transactions on Signal …, 2014 – Cambridge Univ Press … 57] and human–computer interaction (HCI) [17, 46]. To this end, the Sensitive Artificial Listeners (SAL) scenario [61] was developed from the ELIZA concept introduced by Weizenbaum [62]. The SAL scenario can be used to … Related articles

Facial Action Recognition in 2D and 3D M Valstar, S Zafeiriou, M Pantic – Face Recognition in Adverse …, 2014 – books.google.com … Chapter 8 Facial Action Recognition in 2D and 3D Michel Valstar University of Nottingham, UK Stefanos Zafeiriou Imperial College London, UK Maja Pantic Imperial College London, UK ABSTRACT Automatic … All 2 versions

Predicting when to laugh with structured classification B Piot, O Pietquin, M Geist – InterSpeech 2014, 2014 – hal-supelec.archives-ouvertes.fr … Cowie, Florian Eyben, Hatice Gunes, Dirk Heylen, Mark ter Maat, Garry McKeown, Satish Pammi, Maja Pantic, Catherine Pelachaud, Björn Schuller, Etienne de Sevin, Michel Valstar, and Martin Wöllmer, “Building autonomous sensitive artificial listeners,” IEEE Transactions on … Related articles All 10 versions

Subjective Evaluation of a BDI-based Theory of Mind model AB Youssef, N Sabouret, S Caillou – perso.limsi.fr … Springer, 2013. [20] M. Schröder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M. ter Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller, E. de Sevin, MF Valstar, and M. Wöllmer. Building autonomous sensitive artificial listeners. … Related articles All 2 versions

Shape-based modeling of the fundamental frequency contour for emotion detection in speech JP Arias, C Busso, NB Yoma – Computer Speech & Language, 2014 – Elsevier … The emotions are elicited using the Sensitive Artificial Listener (SAL) approach. We consider sessions recorded from ten subjects. The data contains subjective evaluations generated by human raters using Feeltrace ( Cowie et al., 2000). … Cited by 9 Related articles All 6 versions

Effects of off-activity talk in human-robot interaction with diabetic children I Kruijff-Korbayova, E Oleari, I Baroni… – Robot and Human …, 2014 – ieeexplore.ieee.org … 365–377, 2003. [37] M. Schröder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M. ter Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller, E. de Sevin, M. Valstar, and M. Wöllmer, “Building autonomous sensitive artificial listeners,” IEEE Transactions … Related articles

Automatic Recognition Of Laughter Using Verbal And Non-Verbal Acoustic Features T Jacykiewicz, F Ringeval – 2014 – diuf.unifr.ch … quality audiovisual clips recorded during an emotionally coloured interaction with an avatar, known as a Sensitive Artificial Listener (SAL). 443 instances of conversational and social laughter were extracted from 345 video clips to include in the ILHAIRE database. … Related articles

Enabling Empathic Communication in Ubiquitous Computing Environments to Improve Interaction between People CSS TAN – 2014 – doclib.uhasselt.be … Enabling Empathic Communication in Ubiquitous Computing Environments to Improve Interaction between People. Thesis submitted to obtain the degree of Doctor of Science (Computer Science) at Hasselt University, to be defended by …

Introduction To Emotion Recognition A Konar, A Halder… – Emotion Recognition: A …, 2014 – media.johnwiley.com.au … 1 INTRODUCTION TO EMOTION RECOGNITION Amit Konar and Anisha Halder Artificial Intelligence Laboratory, Department … Related articles All 2 versions

Shape-based modeling of the fundamental frequency contour for emotion detection in speech JP Arias Aparicio, C Busso, N Becerra Yoma – 2014 – captura.uchile.cl … The emotions are elicited using the Sensitive Artificial Listener (SAL) approach. We consider sessions recorded from ten subjects. The data contains subjective evaluations generated by human raters using Feeltrace (Cowie et al., 2000). …

Vision and attention theory based sampling for continuous facial emotion recognition A Cruz, B Bhanu, N Thakoor – 2014 – ieeexplore.ieee.org … Cited by 3 Related articles All 3 versions

Recognising Complex Mental States from Naturalistic Human-Computer Interactions H Monkaresi – 2014 – ses.library.usyd.edu.au … QP Quadratic Programming RMSE Root Mean Squared Error ROI Region Of Interest RSP Respiration SAL Sensitive Artificial Listener SAM Self-Assessment Manikin SC Skin Conductivity SDK Software Development Kit SMO Sequential Minimal Optimization … Related articles

The SSPNet-Mobile Corpus: from the detection of non-verbal cues to the inference of social behaviour during mobile phone conversations A Polychroniou – 2014 – theses.gla.ac.uk … Polychroniou, Anna (2014) The SSPNet-Mobile Corpus: from the detection of non-verbal cues to the inference of social behaviour during mobile phone conversations. PhD thesis. … Related articles All 3 versions

[BOOK] Social Interaction, Globalization and Computer-Aided Analysis: A Practical Guide to Developing Social Simulation A Osherenko – 2014 – books.google.com … Robot Interaction Hidden Markov Model Liquid Crystal Display Linguistic Inquiry and Word Count Multiagent System Natural-Language Prisoner’s Dilemma Product Movement Correlation Coefficient Remote Management Agent Sensitive Artificial Listener Social Interaction … Related articles All 3 versions

A dynamic appearance descriptor approach to facial actions temporal modeling B Jiang, M Valstar, B Martinez… – … , IEEE Transactions on, 2014 – ieeexplore.ieee.org … Cited by 19 Related articles All 9 versions