SAL (Sensitive Artificial Listener) 2015


Notes:

  • Emotion modeling
  • Social embodiment
  • Virtual listener

Resources:

See also:

SAL (Sensitive Artificial Listener) 2014


Towards a standard set of acoustic features for the processing of emotion in speech. F Eyben, A Batliner, B Schuller – Proceedings of Meetings on …, 2015 – scitation.aip.org … Speech Under Simulated and Actual Stress (SUSAS, [7]) database), to more naturalistic corpora such as the Audiovisual Interest Corpus (AVIC, [8]), the Sensitive Artificial Listener (SAL, [12]), and the Vera-Am-Mittag (VAM, Eyben et al. … Cited by 9 Related articles

Autonomous agents and avatars in REVERIE’s virtual environment F Kuijk, KC Apostolakis, P Daras, B Ravenet… – Proceedings of the 20th …, 2015 – dl.acm.org … That system is focused on facial expressions and gestures of the upper body. The interaction it supports is interaction between a human in the real world and a single virtual character, a Sensitive Artificial Listener, in a separated virtual world. … Cited by 5 Related articles All 7 versions

BAUM-2: a multilingual audio-visual affective face database CE Erdem, C Turan, Z Aydin – Multimedia Tools and Applications, 2015 – Springer … Collecting databases which contain spontaneous or naturalistic expressions is very difficult, time consuming, and labor intensive. The SEMAINE [35] database contains naturalistic expressions of emotions from 150 subjects who were interacting with a sensitive artificial listener. … Cited by 6 Related articles All 5 versions

Automatic facial expression analysis M Valstar – Understanding Facial Expressions in Communication, 2015 – Springer … The sensitive artificial listener: an induction technique for generating emotionally coloured conversation (pp. 1–4). In LREC Workshop on Corpora for Research on Emotion and Affect. Ekman, P., Friesen, WV, & Hager, JC (2002). FACS manual. Salt Lake City: Research Nexus. … Cited by 7 Related articles All 3 versions

“MACH: My Automated Conversation coach” A review M Barzallo – academia.edu … interviews. They mention that technologies such as the Sensitive Artificial Listener (SAL) (Schroder et al., 2012) and the Rapport Agent (Gratch et al., 2007) do not focus on real task completion or give affective formative feedback. However … Related articles

Emotion recognition from speech signal using mel-frequency cepstral coefficients OE Korkmaz, A Atasoy – 2015 9th International Conference on …, 2015 – ieeexplore.ieee.org … The other databases in literature are Berlin emotional speech database [8], Danish emotional speech database [9], Sensitive Artificial Listener (SAL) [10], Airplane Behaviour Corpus (ABC) [11], Speech Under Simulated and Actual Stress (SUSAS) [12], Audiovisual Interest … Related articles

Can a Virtual Listener Replace a Human Listener in Active Listening Conversation? HH Huang, N Konishi, S Shibusawa… – Proceedings of the …, 2015 – dl.acm.org … mood”. For example, when the user looks to be in a bad mood, the agent shows concern by saying “Are you OK?”, as a human would. The SEMAINE project [8, 11] was launched to build a Sensitive Artificial Listener (SAL). SAL … Related articles

Construction and analysis of social-affective interaction corpus in English and Indonesian N Lubis, S Sakti, G Neubig, T Toda… – … COCOSDA held jointly …, 2015 – ieeexplore.ieee.org … The SEMAINE Database consists of natural dialogue between user and operator simulating a Sensitive Artificial Listener (SAL) [8] with different traits. These different characteristics of the SALs elicit different emotions from the user, thus resulting in emotionally colorful data. … Cited by 1 Related articles

A Revisit to the Incorporation of Context-awareness in Affective Computing Systems A Vlachostergiou, S Kollias – ceur-ws.org … different rooms. The emotions were elicited with the sensitive artificial listener (SAL) framework, where the operator assumes four personalities aiming to elicit positive and negative emotional reactions from the user. Agent’s … Related articles All 2 versions

Fera 2015-second facial expression recognition and analysis challenge MF Valstar, T Almaev, JM Girard… – Automatic Face and …, 2015 – ieeexplore.ieee.org … The scenario used in the recordings is called the Sensitive Artificial Listener (SAL) technique [4]. It involves a user interacting with emotionally stereotyped “characters” whose responses are stock phrases keyed to the user’s emotional state rather than the content of what (s)he … Cited by 46 Related articles All 7 versions

Supervised domain adaptation for emotion recognition from speech M Abdelwahab, C Busso – 2015 IEEE International Conference …, 2015 – ieeexplore.ieee.org … The emotions were elicited with the sensitive artificial listener (SAL) framework, where the operator assumes four personalities aiming to elicit positive and negative emotional reactions from the user. The sessions were emotionally annotated by 6-8 raters. … Cited by 7 Related articles All 5 versions

Assessing speaker independence on a speech-based depression level estimation system P Lopez-Otero, L Docio-Fernandez… – Pattern Recognition …, 2015 – Elsevier … Table 3: • SEMAINE database. Consisting of a large database of conversations between a human user and a sensitive artificial listener (SAL), this database contains several hours of emotionally colored speech [21]. The SOLID … Related articles All 3 versions

Prediction-based Audiovisual Fusion for Classification of Non-linguistic Vocalisations S Petridis, M Pantic, O Rudovic, I Patras… – 2015 – ibug.doc.ic.ac.uk … European Conference on Computer Vision (ECCV’10), September 2010, Heraklion, Crete, Greece, pp. 350–363. Dimensional Emotion Recognition from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listener. H. Gunes, M. Pantic. … Cited by 1 Related articles All 6 versions

Play smile game with erisa: A user study on game companions A Chowanda, P Blanchfield… – … on Engagement in …, 2015 – eprints.nottingham.ac.uk … The SEMAINE project team created a large audiovisual database of interactions with their Embodied Conversational Agent (ECA) using the WoZ method, semi-automatic Sensitive Artificial Listener (SAL) and automatic SAL [11]. … Cited by 2 Related articles All 3 versions

Social behaviour in police interviews: relating data to theories M Bruijnes, J Linssen, R op den Akker… – Conflict and Multimodal …, 2015 – Springer Cited by 10 Related articles All 10 versions

Emotion recognition using synthetic speech as neutral reference R Lotfian, C Busso – 2015 IEEE International Conference on …, 2015 – ieeexplore.ieee.org … Using the sensitive artificial listener (SAL) framework, the operator plays different characters with specific personalities, inducing emotional reactions on the user. While the corpus provides audiovisual recordings, this study only considers the audio from ten users. … Cited by 2 Related articles All 3 versions

A survey on human emotion recognition approaches, databases and applications C Vinola, K Vimaladevi – … Letters on Computer Vision and Image …, 2015 – elcvia.cvc.uab.es … Cited by 1 Related articles All 5 versions

Dialog System For Human-Robot Communication (Bachelor Thesis) M Birger – 2015 – dspace.vutbr.cz … Based on user responses, the vast majority of the system users are satisfied with the system performance [6]. The Semaine project is an EU-FP7 1st call STREP project and aims to build a SAL, a Sensitive Artificial Listener [20]. This multimodal dialogue system can: … Related articles

Human Affect Recognition: Audio-Based Methods B Schuller, F Weninger – Wiley Encyclopedia of Electrical and …, 2015 – Wiley Online Library … assigned to discrete levels of interest. The Belfast Sensitive Artificial Listener (SAL) data (22) consist of conversations between humans and a virtual emotional agent in a Wizard-of-Oz scenario. Unlike the other examples, these … Cited by 1 Related articles

Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data F Ringeval, F Eyben, E Kroupi, A Yuce… – Pattern Recognition …, 2015 – Elsevier … prediction for the utterance. Recently, databases with time-continuous ratings have emerged such as the Sensitive Artificial Listener (SAL) set in the HUMAINE database [9], and the SEMAINE database [10]. Such databases have … Cited by 38 Related articles All 8 versions

Remotely Human M Raman – Communication and Control: Tools, Systems, and …, 2015 – books.google.com … The latest work published as part of the SEMAINE Sensitive Artificial Listener (SAL) project describes SAL’s practical ability to recognize and react to suggestions, opinions, requests for information, and displays of solidarity, antagonism, and tension. … Related articles

Towards the generation of dialogue acts in socio-affective ECAs: a corpus-based prosodic analysis R Bawden, C Clavel, F Landragin – Language Resources and Evaluation, 2015 – Springer … operator and a user. The chosen scenario was that of the Solid Sensitive Artificial Listener, in which dialogue between a person playing the role of the operator and a user was recorded visually and auditorily. Dialogue was non … Cited by 1 Related articles

Exploring dataset similarities using PCA-based feature selection I Siegert, R Böck, A Wendemuth… – … Interaction (ACII), 2015 …, 2015 – ieeexplore.ieee.org … commercial presentation. The Belfast Sensitive Artificial Listener (SAL), used e.g. in [15], contains 25 audio-visual recordings in total from 4 speakers (2 female) with an average length of 20 minutes per subject. The depicted … Cited by 1 Related articles All 8 versions

The recognition of acted interpersonal stance in police interrogations and the influence of actor proficiency M Bruijnes, R op den Akker, S Spitters… – Journal on multimodal …, 2015 – Springer … J Multimodal User Interfaces (2015) 9:353–376. DOI 10.1007/s12193-015-0189-0. … Related articles All 7 versions

A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception K Ruhland, CE Peters, S Andrist… – Computer Graphics …, 2015 – Wiley Online Library … Cited by 7 Related articles All 9 versions

Correcting time-continuous emotional labels by modeling the reaction lag of evaluators S Mariooryad, C Busso – IEEE Transactions on Affective …, 2015 – ieeexplore.ieee.org … Abstract—An appealing scheme to characterize … Cited by 18 Related articles All 3 versions

Comparison of Single-model and Multiple-model Prediction-based Audiovisual Fusion S Petridis, V Rajgarhia, M Pantic – 2015 – doc.utwente.nl … 137–140. [14] E. Douglas-Cowie, R. Cowie, C. Cox, N. Amir, and D. Heylen, “The Sensitive Artificial Listener: an induction technique for generating emotionally coloured conversation,” in Programme of the Workshop on Corpora for Research on Emotion and Affect, 2008, pp. … Related articles All 7 versions

A Multi-task Learning Framework for Emotion Recognition Using 2D Continuous Space R Xia, Y Liu – ieeexplore.ieee.org … The class distribution is: 20.0% angry, 19.6% sad, 29.6% happy, and 30.8% neutral. 4.2 SEMAINE This corpus is developed under a scenario called ‘Sensitive Artificial Listener’ (SAL) [32]. This scenario is designed to generate a conversation between a user and an operator. … Cited by 2 Related articles All 2 versions

Facial Expression Recognition in the Presence of Speech using Blind Lexical Compensation S Mariooryad, C Busso – 2015 – ieeexplore.ieee.org … The operator plays the role of sensitive artificial listener (SAL) agents [39] with different personalities (happy, gloomy, angry and pragmatic) to evoke emotional reactions from the users. The sessions are segmented into speaking turns, which are manually transcribed. … Related articles All 2 versions

A Short Introduction to Laughter S Petridis – researchgate.net … Stavros Petridis, Department of Computing, Imperial College London, London, UK. 1 Production Of Laughter And Speech: The human speech production system is composed of the lungs … Related articles All 3 versions

Affective Speech Recognition F Youssefi, S Pouria – 2015 – uwspace.uwaterloo.ca … Affective Speech Recognition, by Seyyed Pouria Fewzee Youssefi. A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Doctor of Philosophy in Electrical and Computer Engineering. Waterloo, Ontario, Canada, 2015 … Related articles

Situation-and user-adaptive dialogue management G Bertrand – 2015 – oparu.uni-ulm.de … Situation- and User-Adaptive Dialogue Management. Gregor Bertrand, born in Ravensburg. Institute of Communications Engineering: Dialogue Systems, University of Ulm. Dissertation for the degree of Dr. rer. nat. … Related articles All 2 versions

A system for recognizing human emotions based on speech analysis and facial feature extraction: applications to Human-Robot Interaction M Rabiei – 2015 – dspace-uniud.cineca.it … to estimate the emission probabilities [51]. Wollmer et al. in 2010 designed a system for a sensitive artificial listener (SAL) in human-robot communication. The proposed algorithms used linguistic and acoustic features as well as long-range contextual information. … Related articles All 2 versions

The use of ensemble techniques in multiclass speech emotion recognition to improve both accuracy and confidence in classifications A Murphy – 2015 – aran.library.nuigalway.ie … Related articles All 2 versions

MSP-IMPROV: An acted corpus of dyadic interactions to study emotion perception C Busso, S Parthasarathy, A Burmania… – 2015 – ieeexplore.ieee.org … using wizard of oz (WOZ) methods during human-machine interaction [29]–[32], requesting the subjects to recall personal emotional experiences [33], inducing emotions with sensitive artificial listener (SAL) [34], having collaborative tasks between participants [35], and … Cited by 7 Related articles All 2 versions

Emotional and user-specific cues for improved analysis of naturalistic interactions I Siegert – 2015 – d-nb.info … Institute for Information and Communication Technology (IIKT). Emotional and User-Specific Cues for Improved Analysis of Naturalistic Interactions. Dissertation for the academic degree Doktoringenieur (Dr.-Ing.) by Dipl.-Ing. … Cited by 1 Related articles

Using Social Media Networks for Measuring Consumer Confidence: Problems, Issues and Prospects JVCE Igboayaka – 2015 – ruor.uottawa.ca … There are already tools that mine datasets for indicative trends such as opinion or sentiment. General Inquirer from Harvard University, SAL—Sensitive Artificial Listener, TS—Twitter Sentiment, and RA—RateItAll are examples of opinion mining retrieval tools … Related articles All 4 versions