Facial Recognition & Dialog Systems 2014


  • Emotion corpora
  • Emotion recognition
  • Expression recognition


See also:

100 Best Faceposer Videos | 100 Best Faceshift Videos | 100 Best FaceFX Lipsync Videos | 100 Best Mixamo FacePlus Videos

The Appliance of Affective Computing in Man-Machine Dialogue: Assisting Therapy of Having Autism L Han, X Li – … Systems and Network Technologies (CSNT), 2014 …, 2014 – ieeexplore.ieee.org … Picard's team designed an emotional dialogue system, virtual human “Laura” can communicate with users through text interface experience physical … of dialogue between human and computer, proposed with the help of Wang Jijun's emotional expression recognition based model … Related articles All 2 versions

Multi-spectral facial biometrics in access control K Lai, S Samoil… – … Intelligence in Biometrics …, 2014 – ieeexplore.ieee.org … latter makes use of the experience coming from well-known dialogue systems design such as, in particular, SmartKom [4] that possesses … example, the description from the authorized documents may not match the appearance of the individual, and facial recognition may provide … Related articles

Computer Vision based Attentiveness Detection Methods in E-Learning SA Narayanan, MR Kaimal, K Bijlani… – Proceedings of the …, 2014 – dl.acm.org … markers Negative, Neutral & Positive emotions Comparison of multiple classifiers Physics Intelligent Tutoring Dialogue System ITSPOKE … Automatic facial expression recognition for intelligent tutoring systems, IEEE Computer Vision and Pattern Recognition Workshops, pp. … Related articles

Emotion Recognition in Real-world Conditions with Acoustic and Visual Features M Sidorov, W Minker – Proceedings of the 16th International Conference …, 2014 – dl.acm.org … Such opportunity can be useful in various applications, eg, improvement of Spoken Dialogue Systems (SDSs) or monitoring agents in call … 14, 1], Gabor [13], LPQ, PHOG [6] and their combination [25], have been successfully utilized for facial expression recognition and face … Related articles

Vision-based animation of 3D facial avatars T Cho, JH Choi, HJ Kim, SM Choi – 2014 International Conference …, 2014 – computer.org … FACIAL EXPRESSION RECOGNITION For vision-based control of facial animation, face location and major facial features such as eyes … used as a novel interface for many human-computer interaction applications such as tele-presence systems, intelligent dialogue systems, etc. … Related articles All 2 versions


[BOOK] Hybrid Artificial Intelligence Systems: 9th International Conference, HAIS 2014, Salamanca, Spain, June 11-13, 2014, Proceedings M Polycarpou, AC De Carvalho, JS Pan, M Woźniak… – 2014 – books.google.com … Petrica C. Pop, Levente Fuksz, and Andrei Horvat Marc A Framework to Develop Adaptive Multimodal Dialog Systems for Android-Based Mobile Devices….. … Table of Contents XV A 3D Facial Recognition System Using Structured Light Projection…. … All 3 versions

Assistive Robot Enabled Service Architecture to Support Home-Based Dementia Care R Khosla, K Nguyen, MT Chu – Service-Oriented Computing …, 2014 – ieeexplore.ieee.org … and management; b) capturing the interactional engagement of the user using a networked-based facial expression recognition system; c … on PC or touch panel) and artificial intelligence programs (eg, emotionally intelligent persuasive diet suggestion dialog system) located and … Related articles

Metalogue: A Multiperspective Multimodal Dialogue System with Metacognitive Abilities for Highly Adaptive and Flexible Dialogue Management J Alexandersson, M Aretoulaki… – Intelligent …, 2014 – ieeexplore.ieee.org … The goal of Metalogue is to produce a multimodal dialogue system that is able to implement an interactive behaviour that seems natural to users … For example, the ability to use facial recognition will be complemented by the generation of facial expressions via virtual characters. … Cited by 1 Related articles All 2 versions

Electrocardiogram-based emotion recognition system using empirical mode decomposition and discrete Fourier transform M Murugappan, K Wan, S Yaacob – Expert Systems, 2014 – Wiley Online Library … Cited by 2 Related articles All 4 versions
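The Murugappan et al. entry above pairs empirical mode decomposition with the discrete Fourier transform to derive emotion features from ECG. As a minimal sketch of only the DFT stage (the EMD step is omitted for brevity, and the band edges and function name are my own assumptions, not the paper's), band power can be read off the power spectrum like this:

```python
import numpy as np

def dft_band_features(signal, fs, bands=((0.5, 4.0), (4.0, 8.0), (8.0, 13.0))):
    """Hypothetical sketch: summed spectral power of a physiological signal
    in a few frequency bands, via the discrete Fourier transform. The cited
    paper first decomposes the ECG with EMD; here we use the raw signal."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)   # bin frequencies in Hz
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands])

# toy check: a 2 Hz sinusoid concentrates its power in the 0.5-4 Hz band
fs = 128
t = np.arange(fs * 4) / fs
feats = dft_band_features(np.sin(2 * np.pi * 2.0 * t), fs)
```

The resulting per-band powers would then feed a classifier, in the same spirit as the spectral features used across the EEG/ECG entries in this list.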

Thermal expression of intersubjectivity offers new possibilities to human–machine and technologically mediated interactions A Merla – Frontiers in psychology, 2014 – ncbi.nlm.nih.gov … (2013) recently proposed the robotics dialog system (RDS). … the first one recognizes the object in the field of view (SHORE – Sophisticated High-speed Object Recognition Engine) and the second one the facial expressions (CERT – Computer Expression Recognition Toolbox). … Cited by 1 Related articles All 8 versions

Audiovisual emotion recognition using ANOVA feature selection method and multi-classifier neural networks M Bejani, D Gharavian, NM Charkari – Neural Computing and Applications, 2014 – Springer … Speech emotion recognition system is based on prosody features, mel-frequency cepstral coefficients (a representation of the short-term power spectrum of a sound) and facial expression recognition based on integrated time motion image and quantized image matrix, which … Cited by 1 Related articles All 4 versions
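The Bejani et al. entry above uses ANOVA to select emotion features before classification. A minimal sketch of that selection step (a generic one-way ANOVA F score per feature, written from scratch; not the authors' code, and the toy data is invented):

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F statistic per feature column: between-class variance
    over within-class variance. A higher F means the feature separates the
    emotion classes better, so the top-F features are kept."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    ss_between = np.zeros(X.shape[1])
    ss_within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        ss_between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    ms_between = ss_between / (len(classes) - 1)
    ms_within = ss_within / (len(y) - len(classes))
    return ms_between / ms_within

# toy data: feature 0 separates the two classes, feature 1 is pure noise
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),
                     rng.standard_normal(100)])
scores = anova_f_scores(X, y)
```

Ranking features by `scores` and keeping the top k is the usual way such a filter precedes the neural-network classifiers the paper describes.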

Bimodal Emotion Recognition from Speech and Text W Ye, X Fan – International Journal of Advanced Computer Science …, 2014 – thesai.org … Busso et al. [3] analyzed the complementarity of speech emotion recognition and facial expression recognition, presented a multi-modal emotion … This method can be applied to a telephone service center dialogue system to recognize customers’ negative emotions, such as anger … Related articles All 3 versions

Minotaurus: A System for Affective Human–Robot Interaction in Smart Environments J Röning, J Holappa, V Kellokumpu, A Tikanmäki… – Cognitive …, 2014 – Springer … For Minotaurus, we adopted a state-of-the-art method for posed expression recognition based on analyzing facial dynamics with spatiotemporal LBP descriptors [26]. … To find the bounding box Fig. 3 Examples of facial expression recognition. … Cited by 2 Related articles
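Several entries above (Minotaurus, Sidorov et al.) rely on LBP-family descriptors for facial expression recognition. As a simplified illustration of the basic idea only (plain 8-neighbour LBP on a single image, not the spatiotemporal LBP variant the Minotaurus paper uses; function name and test patch are my own):

```python
import numpy as np

def lbp_histogram(image):
    """Basic 8-neighbour LBP: each interior pixel is encoded by which of its
    eight neighbours are >= the centre value, and the resulting 0-255 codes
    are pooled into a normalised 256-bin histogram used as a descriptor."""
    img = image.astype(np.int32)
    c = img[1:-1, 1:-1]                      # centre pixels
    # eight neighbour offsets into the padded image, clockwise from top-left
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        codes += (n >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()                 # normalised 256-bin histogram

# sanity check: a flat patch maps every pixel to code 255 (all bits set)
flat = np.full((8, 8), 7, dtype=np.uint8)
h = lbp_histogram(flat)
```

In expression-recognition pipelines the face is typically divided into a grid, one such histogram is computed per cell, and the concatenated histograms are fed to a classifier.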

A hierarchical probabilistic framework for recognizing learners’ interaction experience trends and emotions I Jraidi, M Chaouachi, C Frasson – Advances in Human-Computer …, 2014 – dl.acm.org … Imène Jraidi, Maher Chaouachi, and Claude Frasson, Department of Computer Science … Cited by 1 Related articles All 2 versions

Affective communication robot partners using associative memory with mood congruency effects N Masuyama, M Islam, CK Loo – Robotic Intelligence In …, 2014 – ieeexplore.ieee.org … B. Effect of Human Facial Expression Recognition for Robot Partners Several studies in the psychology field have noted that facial expressions and emotions are closely related to each other, and facial expression can be regarded as one of the most essential means of expressing one’s emotions … Related articles

Emotion recognition from facial EMG signals using higher order statistics and principal component analysis S Jerritta, M Murugappan, K Wan… – Journal of the Chinese …, 2014 – Taylor & Francis … 2004. “Facial Expression Recognition Through Pattern Analysis of Facial Muscle Movements Utilizing Electromyogram Sensors.” In Proceedings of TENCON 2004. 2004 IEEE Region 10 Conference. Chiang Mai, 21–24 November 2004: 3: 600–603. … Related articles All 3 versions

Investigation of Speaker Group-Dependent Modelling for Recognition of Affective States from Speech I Siegert, D Philippou-Hübner, K Hartmann, R Böck… – Cognitive …, 2014 – Springer … Robust acoustic and semantic modeling in a telephone-based spoken dialog system. Ph.D. thesis, Otto von Guericke University Magdeburg; 2009. … CrossRef; Zeng Z, Tu J, Pianfetti BM, Huang TS. Audio–visual affective expression recognition through multistream fused HMM. … Related articles

A Dialogue System for Ensuring Safe Rehabilitation A Papangelis, G Galatas, K Tsiakas… – Universal Access in …, 2014 – Springer … human’s feelings and the interest in these works has focused for example in emergency response systems, where the user cannot signal an emergency (eg by pressing a button) and in emotional expression recognition. … A Dialogue System for Ensuring Safe Rehabilitation … Related articles

Automatic Recognition of Personality Traits: A Multimodal Approach M Sidorov, S Ultes, A Schmitt – Proceedings of the 2014 Workshop on …, 2014 – dl.acm.org … Adding personality-dependency may be useful to build speaker-adaptive models, eg, to improve Spoken Dialogue Systems (SDSs) or to monitor agents in call-centers. … Local gabor binary patterns from three orthogonal planes for automatic facial expression recognition. … Related articles

Audio-visual Keyword Spotting for Mandarin Based on Discriminative Local Spatial-temporal Descriptors H Liu, T Fan, P Wu – Pattern Recognition (ICPR), 2014 22nd …, 2014 – ieeexplore.ieee.org … Therefore, KWS is more suitable for some specific applications such as dialogue systems and has been widely researched in the past … a generic facial expression recognition framework, IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. … Related articles All 4 versions

A Security System by using Face and Speech Detection CS Patil, GN Dhoot – 2014 – inpressco.com … Practical applications of speech recognition and dialogue systems bring sometimes a requirement to synthesize or reconstruct the speech from the saved or … 1-6. P. Michel and R. El Kaliouby, Real Time Facial Expression Recognition in Video Using Support Vector Machines … Related articles

Introduction To Emotion Recognition A Konar, A Halder… – Emotion Recognition: A …, 2014 – media.johnwiley.com.au … INTRODUCTION TO EMOTION RECOGNITION Amit Konar and Anisha Halder, Artificial Intelligence Laboratory, Department … Related articles All 2 versions

Modelling User Experience in Human-Robot Interactions K Jokinen, G Wilcock – Multimodal Analyses enabling Artificial Agents in …, 2014 – Springer … In the evaluation of the multimodal system SmartKom [3], the task-based spoken dialogue system evaluation framework Paradise [19 … By abstracting over functionally similar modality technologies that can be used in parallel (speech, gesture, and facial recognition) and weighting …

Emotion Recognition and Depression Diagnosis by Acoustic and Visual Features: A Multimodal Approach M Sidorov, W Minker – Proceedings of the 4th International Workshop on …, 2014 – dl.acm.org … Such opportunity can be useful in various applications, eg, improvement of Spoken Dialogue Systems (SDSs) or monitoring agents in call-centers. Depression is another aspect of human beings which is closely related to emotions. … facial expression recognition. … Cited by 2 Related articles

[BOOK] Human Language Technology Challenges for Computer Science and Linguistics: 5th Language and Technology Conference, LTC 2011, Poznań, Poland, … Z Vetulani, J Mariani – 2014 – books.google.com … a further contribution to the intonational modelling of backchannels in Italian, useful for improving naturalness in voice-based dialogue systems for this … Fumiyo Fukumoto, Yoshimi Suzuki, and Takeshi Yamamoto Temporal Expression Recognition Using Dependency Trees … All 2 versions

Why and How to Build Emotion-Based Agent Architectures C Lisetti, E Hudlicka – The Oxford Handbook of Affective …, 2014 – books.google.com … Understandably, FACS, EmFACS, and the corresponding CMU-Pittsburgh AU-coded face expression image database (Kanade et al., 2000) have been very instrumental to the progress of automatic facial expression recognition and analysis, on the one hand, and of facial … Related articles

Validating Attention Classifiers for Multi-Party Human-Robot Interaction ME Foster – workshops.acin.tuwien.ac.at … For example, Li et al. [19] estimated the attention state of users of a robot in a public space, combining person tracking, facial expression recognition, and speaking recognition. Castellano et al. … Learning to predict engagement with a spoken dialog system in open-world settings. … Related articles All 2 versions

Emotional Intelligence and Agents: Survey and Possible Applications M Ivanović, M Radovanović, Z Budimac… – Proceedings of the 4th …, 2014 – dl.acm.org … Facial recognition poses probably the most complex set of problems, but some efficient solutions do exist. … [7] F. Burkhardt, M. van Ballegooy, K.-P. Engelbrecht, T. Polzehl, and J. Stegmann. Emotion detection in dialog systems: Applications, strategies and challenges. … Related articles All 6 versions

Shut up and play: A musical approach to engagement and social presence in Human Robot Interaction L McCallum, PW McOwan – … , 2014 RO-MAN: The 23rd IEEE …, 2014 – ieeexplore.ieee.org … facial expressions are often used as affect displays in musical performance [35], meaning that automatic facial recognition techniques could … http://felixsmachines.com/ [31] G. Skantze and A. Hjalmarsson, “Towards incremental speech generation in dialogue systems,” in Proc. … Cited by 1 Related articles

Extracting and Visualizing Research Impact Semantics M Wallace – Semantic and Social Media Adaptation and …, 2014 – ieeexplore.ieee.org … Dimensional emotion representation as a basis for speech synthesis with non-extreme emotions, Affective dialogue systems, Springer, 2004. … [28] S. Ioannou, G. Caridakis, K. Karpouzis, S. Kollias, Robust Feature Detection for Facial Expression Recognition, EURASIP Journal …

Emotion classification in Parkinson’s disease by higher-order spectra and power spectrum features using EEG signals: A comparative study R Yuvaraj, M Murugappan, NM Ibrahim… – Journal of integrative …, 2014 – World Scientific … R. Yuvaraj, M. Murugappan, Norlinah Mohamed Ibrahim, Mohd Iqbal … Cited by 4 Related articles All 5 versions

A statistical parametric approach to video-realistic text-driven talking avatar L Xie, N Sun, B Fan – Multimedia Tools and Applications, 2014 – Springer … speech-driven or performance-driven. The choice of the input method depends largely on the application. A dialogue system usually uses a text-driven approach since the spoken text is known. For example, AT&T Bell labs developed … Cited by 7 Related articles All 2 versions

Inferring depression and affect from application dependent meta knowledge M Kächele, M Schels, F Schwenker – Proceedings of the 4th International …, 2014 – dl.acm.org … One general approach is to instruct a test subject to solve a specific task using a computer. An example for this kind of data collection is the EmoRec II corpus, where a subject is playing multiple rounds of a card game using a voice controlled dialog system [59]. … Cited by 5 Related articles All 2 versions

Natural Communication about Uncertainties in Situated Interaction T Pejsa, D Bohus, MF Cohen, CW Saw… – Proceedings of the 16th …, 2014 – dl.acm.org … For example, dialog systems reason about uncertainties that arise in recognizing natural language via measures of recognition confidence and may … intention level, eg, the system might incorrectly infer the user’s identity on the basis of inaccurate facial recognition and speech … Related articles All 9 versions

Virtual assistive companions for older adults: qualitative field study and design implications C Tsiourti, E Joly, C Wings, MB Moussa… – Proceedings of the 8th …, 2014 – dl.acm.org … This paper presents a qualitative study conducted to explore perceptions, attitudes and expectations for a virtual … Related articles All 2 versions

Grounding emotion appraisal in autonomous humanoids K Kiryazov – 2014 – diva-portal.org Linköping Studies in Science and Technology, Thesis No. 1657, Licentiate Thesis, by Kiril Kiryazov, Department of Computer and Information Science, Linköping University, SE-581 83 Linköping, Sweden … Related articles All 6 versions

An Event Driven Fusion Approach for Enjoyment Recognition in Real-time F Lingenfelser, J Wagner, E André… – Proceedings of the …, 2014 – dl.acm.org … Florian Lingenfelser and Johannes Wagner, Human Centered Multimedia, University of Augsburg … Related articles

Pattern Recognition for Biometrics and Bioinformatics KL Du, MNS Swamy – Neural Networks and Statistical Learning, 2014 – Springer … One of the key tasks of spoken-dialog systems is classification. Gait is an efficient biometric feature for human identification at a distance. … Research efforts in face processing are in face detection, face recognition, face tracking, pose estimation, and expression recognition. … Related articles

Using Educational DVDs to Enhance Preschoolers’ STEM Education PBL Beaudoin-Ryan – web5.soc.northwestern.edu A Collaborative Project Funded by the National Science Foundation (Grant No. DRL-1252146), Northwestern University, University of California at Riverside, Georgetown University; Prepared By Leanne Beaudoin-Ryan … Related articles All 3 versions

A Survey on Large Scale Corpora and Emotion Corpora M Ptaszynski, R Rzepka, S Oyama, M Kurihara… – Information and Media …, 2014 – jlc.jst.go.jp … are crucial for training many AI applications, from part-of-speech taggers and dependency parsers to dialog systems or sentiment … Firstly, seed words were selected for six emotion classes borrowed from Ekman’s standard for basic emotions in facial expression recognition. … Related articles All 4 versions

Speech emotion recognition using amplitude modulation parameters and a combined feature selection procedure A Mencattini, E Martinelli, G Costantini… – Knowledge-Based …, 2014 – Elsevier Speech emotion recognition (SER) is a challenging framework in demanding human machine interaction systems. Standard approaches based on the categorical model o. Cited by 2 Related articles All 3 versions

FEEL: A system for frequent event and electrodermal activity labeling Y Ayzenberg, RW Picard – Biomedical and Health Informatics, …, 2014 – ieeexplore.ieee.org … Cited by 4 Related articles All 4 versions

Related References C Adam, B Gaudou, E Lorini, IN Chong… – Computer Vision and …, 2014 – books.google.com … Hershey, PA: Engineering Science Reference. Dornaika, F., Dornaika, F., Raducanu, B., & Raducanu, B.(2011). Subtle Facial Expression Recognition in Still Images and Videos. In Y. Zhang (Ed.), Advances in Face Image Analysis: Techniques and Technologies (pp. 259–278). … Related articles

Feel and sense the product: experimental based optimization methodology E GATTI – 2014 – politesi.polimi.it … by Elia Gatti, PhD Candidate, 26° Ciclo; Tutor: Monica Bordegoni … All 2 versions

Attentional Mechanisms for Socially Interactive Robots–A Survey JF Ferreira, J Dias – Autonomous Mental Development, IEEE …, 2014 – ieeexplore.ieee.org … Ten years later, Boucenna, Gaussier, and Hafemeister [89] followed up by designing a system modelling joint attention for social referencing by using gaze following, facial expression recognition and motion detection mechanisms (the first two learnt autonomously) to … Cited by 3 Related articles All 2 versions

Machine Learning for Social Multiparty Human–Robot Interaction S Keizer, M Ellen Foster, Z Wang… – ACM Transactions on …, 2014 – dl.acm.org … For example, Li et al. [2012] estimated the attention state of users of a robot in a public space, combining person tracking, facial expression recognition, and … (This section is adapted from Foster et al. [2013].) ACM Transactions on Interactive Intelligent Systems, Vol. 4, No. … Cited by 3 Related articles

EyeBit: Eye-Tracking Approach for Enforcing Phishing Prevention Habits D Miyamoto, T Iimura, G Blanc, H Tazaki… – necoma-project.eu … Daisuke Miyamoto, Takuji Iimura, Gregory Blanc, Hajime Tazaki, Youki Kadobayashi – Information Technology Center, The University … Related articles