Notes:
These notes cover computational models that allow virtual humans (agents) to interpret and respond to the feedback and behaviors of their human interlocutors in natural, adaptive ways. Recurring themes include cognitive architectures and the need to simulate both the cognitive and the embodied aspects of human behavior; applications such as healthcare support, social interaction, and storytelling; and the challenge of creating realistic and diverse behaviors for virtual humans. The notes also cover imitation learning and multimodal listener behaviors in spoken dialogue, which help virtual humans communicate effectively and recognize engagement.
Imitation learning is a machine learning technique in which an agent learns to perform a task by observing and reproducing the actions of a human or another agent. For virtual humans, this means learning to perform tasks in a virtual environment by watching how a human user acts in that environment. Imitation learning lets virtual humans interact with users more naturally and realistically, and supports behaviors that require precise physical movement or coordination.
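As a minimal sketch of the idea (not any specific published system), a nearest-neighbour imitator stores a human's (state, action) demonstrations and replays the action recorded in the most similar state. The 2-D state space and the action names below are illustrative placeholders:

```python
import math

class NearestNeighborImitator:
    """Minimal imitation learner: store human demonstrations as
    (state, action) pairs and reproduce the action whose recorded
    state is closest to the current one."""

    def __init__(self):
        self.demonstrations = []  # list of (state, action) pairs

    def observe(self, state, action):
        """Record one step performed by the human demonstrator."""
        self.demonstrations.append((state, action))

    def act(self, state):
        """Copy the action taken in the most similar observed state."""
        if not self.demonstrations:
            raise RuntimeError("no demonstrations observed yet")
        _, action = min(self.demonstrations,
                        key=lambda d: math.dist(d[0], state))
        return action

# A human demonstrates reaching toward a target in a 2-D virtual space:
agent = NearestNeighborImitator()
agent.observe((0.0, 0.0), "step_right")
agent.observe((1.0, 0.0), "step_up")
agent.observe((1.0, 1.0), "grasp")

print(agent.act((0.9, 0.1)))  # nearest demonstration is (1.0, 0.0) → "step_up"
```

Real systems typically replace the nearest-neighbour lookup with a learned policy (e.g. behavioral cloning with a neural network), but the observe-then-reproduce loop is the same.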
Multimodal listener behaviors in spoken dialogue refer to the various nonverbal behaviors that a listener exhibits during a conversation, such as facial expressions, gestures, and body posture. These behaviors can provide important cues about the listener’s attention, engagement, and emotional state, and can influence the way that the speaker communicates and the overall flow of the conversation. In the context of virtual humans, multimodal listener behaviors can be used to improve the realism and effectiveness of virtual human agents in spoken dialogue systems by enabling them to more accurately interpret and respond to the listener’s nonverbal cues.
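A toy illustration of the generation side: a rule that maps observed speaker cues to a listener behavior such as a nod or a verbal backchannel. The cue names and thresholds are hand-picked assumptions for the sketch, not taken from any specific published model:

```python
def listener_response(cues):
    """Toy multimodal listener model: map observed speaker cues
    (a dict of hypothetical cue names) to one nonverbal listener
    behavior for a virtual human to display."""
    # A pause with falling pitch suggests a phrase boundary,
    # which invites a short verbal backchannel.
    if cues.get("speaker_pause_ms", 0) > 400 and cues.get("pitch_falling", False):
        return "backchannel:mm-hm"
    # Mutual gaze from the speaker is typically answered with a nod.
    if cues.get("speaker_gaze_at_listener", False):
        return "head_nod"
    # Otherwise keep displaying attention through gaze.
    return "sustain_gaze"

print(listener_response({"speaker_pause_ms": 520, "pitch_falling": True}))
# → backchannel:mm-hm
```

Systems like those in the references below learn such mappings from recorded dialogue data instead of hand-writing the rules, but the input (multimodal cues) and output (a listener behavior) have the same shape.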
- Behavior generation module: a component within a larger system that synthesizes concrete behaviors or actions for an agent.
- Behavior generator: a standalone tool or system that synthesizes concrete behaviors or actions for an agent.
- Behavior planner: a tool or system that decides which actions or behaviors an agent should take, and in what order, to achieve a specific goal or objective.
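The split between planning (what to do) and generation (how to express it) can be sketched as follows; the goal name, modalities, and command format are hypothetical placeholders, loosely following the plan-then-generate separation of SAIBA-style pipelines mentioned in the references below:

```python
from dataclasses import dataclass

@dataclass
class BehaviorStep:
    modality: str  # e.g. "speech", "gesture", "gaze"
    action: str

def plan_behavior(goal):
    """Behavior planner: decide *what* the agent should do for a goal.
    (Goals and steps here are illustrative placeholders.)"""
    if goal == "greet_user":
        return [BehaviorStep("gaze", "look_at_user"),
                BehaviorStep("speech", "say:Hello!"),
                BehaviorStep("gesture", "wave")]
    return []

def generate_behavior(plan):
    """Behavior generator: turn the abstract plan into concrete
    commands for a downstream realizer (here, simple strings)."""
    return [f"{step.modality.upper()}({step.action})" for step in plan]

commands = generate_behavior(plan_behavior("greet_user"))
print(commands)  # ['GAZE(look_at_user)', 'SPEECH(say:Hello!)', 'GESTURE(wave)']
```

In SAIBA terms, the planner's output corresponds roughly to FML (intent) and the generator's output to BML (behavior), with a realizer such as SmartBody executing the commands.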
References:
- Conversational Informatics: A Data-Intensive Approach with Emphasis on Nonverbal Communication (2014)
See also:
Behavior Realizers | Procedural Generation & Natural Language Processing | Realizers In Natural Language Processing | SmartBody | SSML (Speech Synthesis Markup Language) & Dialog Systems
Virtual humans in cultural heritage ICT applications: A review
OM Machidon, M Duguleana, M Carrozzino – Journal of Cultural Heritage, 2018 – Elsevier
… This approach, while ensuring a clearer communication, limits the interaction of the virtual human with only one visitor at a time … Events are then used by the behaviour generation module, in charge of all of the VH’s manifestations, including speech (by synthesis of prosodic …
Benchmark framework for virtual students’ behaviours
JL Lugrin, F Charles, M Habel, J Matthews… – Proceedings of the 17th …, 2018 – dl.acm.org
… KEYWORDS Virtual Training, Agents Behaviour Generation, Benchmarking ACM Reference Format: Jean-Luc Lugrin, Fred Charles, Michael Habel … audience [6]. However, creating realistic and diverse atmospheres from the portrayal of virtual humans’ behaviours presents …
Results of the first annual human-agent league of the automated negotiating agents competition
J Mell, J Gratch, T Baarslag, R Aydoğan… – Proceedings of the 18th …, 2018 – dl.acm.org
… The design of negotiating agents presents problems in strategy, opponent modelling, preference elicitation, rapport-building, natural language generation/understanding, non-verbal behavior generation, use of emotional … Negotiation as a challenge problem for virtual humans …
Door and doorway etiquette for virtual humans
W Huang, D Terzopoulos – IEEE transactions on visualization …, 2018 – ieeexplore.ieee.org
… HUANG AND TERZOPOULOS: DOOR AND DOORWAY ETIQUETTE FOR VIRTUAL HUMANS 3 … Section 5 presents the decision model and constituent factors that determine behavior generation in our autonomous pedestrians. Section 6 presents our experiments and results …
Hebbian plasticity in cpg controllers facilitates self-synchronization for human-robot handshaking
M Jouaiti, L Caron, P Hénaff – Frontiers in neurorobotics, 2018 – frontiersin.org
It is well-known that human social interactions generate synchrony phenomena which are often unconscious. If the interaction between individuals is based on rhythmic movements, synchronized and coordinated movements will emerge from the social synchrony. This paper proposes …
Data Driven Non-Verbal Behavior Generation for Humanoid Robots
T Kucherenko – Proceedings of the 2018 on International Conference …, 2018 – dl.acm.org
… 2018. Data Driven Non-Verbal Behavior Generation for Humanoid Robots. In … system. The mapping will first be used to generate plausible upper-body motion for a virtual human, which will then be re-targeted to a humanoid robot …
A novel realizer of conversational behavior for affective and personalized human machine interaction-EVA U-Realizer
I Mlakar, Z Kačič, M Borko, M Rojc – WSEAS Trans. Environ. Dev, 2018 – researchgate.net
… game engine6. Further, Elckerlyc [19] is a BML realizer for generating multimodal verbal and nonverbal behaviour for Virtual Humans (VHs … offers a high degree of animation control via the EMBRScript language that is used as interlink between the behaviour generator, and the …
One-shot learning of human–robot handovers with triadic interaction meshes
D Vogt, S Stepputtis, B Jung, HB Amor – Autonomous Robots, 2018 – Springer
We propose an imitation learning methodology that allows robots to seamlessly retrieve and pass objects to and from human users. Instead of hand-coding interaction parameters, we extract relevant…
A Model for Eye and Head Motion for Virtual Agents
J Krejsa, B Kerou, F Liarokapis – 2018 10th International …, 2018 – ieeexplore.ieee.org
… “A review of eye gaze in virtual agents, social robotics and hci: Behaviour generation, user interaction and perception,” Computer Graphics Forum, 34: pp. 299–326, 2015 … “All Together Now: Introducing the Virtual Human Toolkit,” In 13th International Conference on Intelligent …
The importance of regulatory fit & early success in a human-machine game
E Pincus, S Lei, G Lucas, E Johnson, M Tsang… – Proceedings of the …, 2018 – dl.acm.org
… 6 CONCLUSION We explore whether knowledge of a player’s chronic RF can be leveraged by an automatic virtual human in a game to create regulatory-fit effects … 2006. Nonverbal behavior generator for embodied conversational agents. In Intelligent virtual agents (IVA) …
Field Trial Analysis of Socially Aware Robot Assistant
F Pecune, J Chen, Y Matsuyama… – Proceedings of the 17th …, 2018 – dl.acm.org
… Given the system’s task and social intentions decided by the Task and Social Reasoners, a Natural Language Generator (NLG) and Nonverbal Behavior Generator interpreted these intentions into a sentence and nonverbal behavior plans rendered on SARA’s character …
Communicative Listener Feedback in Human-Agent Interaction: Artificial Speakers Need to Be Attentive and Adaptive
H Buschmeier, S Kopp – … of the 17th International Conference on …, 2018 – dl.acm.org
… computational models for an ‘attentive speaker’ agent that is able to (1) interpret the feedback behaviour of its human interlocutors by probabilistically attributing listening-related mental states to them; (2) incrementally adapt its ongoing language and behaviour generation to their …
A survey of cognitive architectures in the past 20 years
P Ye, T Wang, FY Wang – IEEE transactions on cybernetics, 2018 – ieeexplore.ieee.org
… 4-D Real-Time Control System (4D/RCS): The 4D/RCS architecture has been developed since the 1980s for military unmanned vehicles [5], [6]. It is a three-tiered structure. The top behavior generation layer converts original missions into concrete actions …
Conversational Assistants for Elderly Users–The Importance of Socially Cooperative Dialogue
S Kopp, M Brandt, H Buschmeier, K Cyra… – … Agents in Home and …, 2018 – pub.uni-bielefeld.de
… Tobii 4C Eyetracking Head-gesture recognition Filled pause detection Dialog management Behavior generation Natural language understanding Nuance Dragon ASR … 2017. Pragmatic multimodality: Effects of nonverbal cues of focus and certainty in a virtual human …
Automating the production of communicative gestures in embodied characters
B Ravenet, C Pelachaud, C Clavel… – Frontiers in …, 2018 – ncbi.nlm.nih.gov
Directing Virtual Humans Using Play-Scripts and Spatio-Temporal Reasoning
C Talbot – 2018 – researchgate.net
… ABSTRACT: Directing Virtual Humans Using Play-Scripts and Spatio-temporal Reasoning, by Christine Talbot. (Under the direction of DR …
Increasing the feeling of social presence by incorporating realistic interactions in multi-party VR
W Hai, N Jain, A Wydra, NM Thalmann… – Proceedings of the 31st …, 2018 – dl.acm.org
… 2003. Bottom-up visual attention for virtual human animation. In Computer Animation and Social Agents, 2003 … 2015. A review of eye gaze in virtual agents, social robotics and hci: Behaviour generation, user interaction and perception. In Computer Graphics Forum, Vol. 34 …
Flipper 2.0: a pragmatic dialogue engine for embodied conversational agents
J van Waterschoot, M Bruijnes, J Flokstra… – Proceedings of the 18th …, 2018 – dl.acm.org
… This further extends the capabilities and flexibility of Flipper. Examples of useful Java modules are the StanfordCoreNLP for natural language understanding [16] and BML translators such as ASAP for behaviour generation [26] …
Controlling synthetic characters in simulations: A case for cognitive architectures and Sigma
V Ustun, PS Rosenbloom, S Sajjadi, J Nuttall… – Proceedings of I/ITSEC …, 2018 – bcf.usc.edu
… for utilizing cognitive architectures in cognitive behavior model development rather than devising ad-hoc models with narrow scope for behavior generation … responding to the events around them; (3) interact in a natural way with both real and other virtual humans using verbal …
NADiA-Towards Neural Network Driven Virtual Human Conversation Agents
J Wu, S Ghosh, M Chollet, S Ly, S Mozgai… – Proceedings of the 17th …, 2018 – dl.acm.org
… Smartbody Mobile (SB Mobile) provides a lightweight platform specifically for developing conversational virtual humans. SB Mobile is easily imported into a standard Android application. The behavior generation commands for NADiA are Smartbody scripts written in Python …
HVUAN–A Rapid-Development Framework for Spanish-Speaking Virtual Humans
D Herrera, L Herrera, Y Velandia – … on Practical Applications of Agents and …, 2018 – Springer
… recognizer. Keywords. Virtual human system Rapid development frameworks Embodied conversational agent Nonverbal behavior generator Gesture Dialogue manager. Download conference paper PDF. 1 Introduction. Embodied …
Investigating the use of recurrent motion modelling for speech gesture generation.
Y Ferstl, R McDonnell – IVA, 2018 – scss.tcd.ie
… KEYWORDS character animation, motion synthesis, behavior generation, recurrent networks, deep learning ACM Reference Format: Ylva Ferstl and … 1 INTRODUCTION Virtual humans are becoming more and more popular for many applications, such as video games, human …
Accuracy of Perceiving Precisely Gazing Virtual Agents.
S Loth, G Horstmann, C Osterbrink, S Kopp – IVA, 2018 – uni-bielefeld.de
… 2010. Individualized Gesturing Outperforms Average Gesturing – Evaluating Gesture Production in Virtual Humans … 2015. A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception: A Review of Eye Gaze …
An embodied virtual agent platform for emotional Stroop effect experiments: A proof of concept
A Oker, N Glas, F Pecune, C Pelachaud – Biologically inspired cognitive …, 2018 – Elsevier
… According to Gratch, “virtual humans aspire to simulate the cognitive abilities of people, but also many of the “embodied” aspects of human behavior, more traditionally studied in fields … SAIBA is a multimodal behavior generation framework for embodied conversational agents …
Real time simulation of virtual human crowd planning
R CHIGHOUB – 2018 – thesis.univ-biskra.dz
… d’humains virtuels [of virtual humans] Real time simulation of virtual human crowd planning. Presented by: CHIGHOUB Rabiaa … Abstract: Behavioral simulation consists of simulating and animating virtual environments populated by virtual humans, and focuses on both local and global realism …
An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans
M Nixon, S DiPaola, U Bernardet – 2018 IEEE Conference on …, 2018 – ieeexplore.ieee.org
Page 1. An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans Michael … In this paper, we investigate the communication of status related social signals by means of a virtual human’s eye gaze. We …
Deployment of Synthetic Emotions in Robotic Mechanism: A Systematic Approach
S Patil, P Wararkar, R Bhat, S Kurumbanshi – ijcem.org
… Similarly, virtual humans will need ways to express emotional behaviour when they face conditions such as social interaction, cooperation and learning … 4 The three stages of behavior generation in the SAIBA framework and the two mediating languages FML and BML [21][24] A …
Technological speculations for african oral storytelling: implication of creating expressive embodied conversational agents
MA Allela – Proceedings of the Second African Conference for …, 2018 – dl.acm.org
… on developing a model for designing expressive performance- focused and story-driven interaction using Virtual Humans (VHs) within … requires a combination of technologies ranging from speech recognition, voice synthesis, non-verbal behavior generation, natural language …
Deliverable: Requirements and Concepts for Interaction, Mobile and Web
C Pelachaud, RB Kantharaju – 2018 – council-of-coaches.eu
… It can be a virtual human avatar or software-based … This work will make use of and build upon the GRETA/VIB platform developed at UPMC (Pecune, Cafaro, Chollet, Philippe, & Pelachaud, 2014) for multimodal behaviour generation and for visualising Embodied Conversational …
Analysis of the Effect of Agent’s Embodiment and Gaze Amount on Personality Perception
T Koda, T Ishioh – Proceedings of the 4th International Workshop on …, 2018 – dl.acm.org
… The rickel gaze model: A window on the mind of a virtual human. In Proceedings of Intelligent Virtual Agents, Springer (2007), 296-303 … 2015. A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception …
Towards Developing a Model to Handle Multiparty Conversations for Healthcare Agents.
RB Kantharaju, C Pelachaud – ICAHGCA@ AAMAS, 2018 – council-of-coaches.eu
… This work will make use of and build upon the GRETA/VIB platform [25] for multimodal behaviour generation and for visualizing … In [9], a virtual human was developed for conducting interviews for healthcare support and it was shown that participants reported willingness to …
A Text to Animation System for Physical Exercises
H Sarma, R Porzel, JD Smeddinck… – The Computer …, 2018 – academic.oup.com
Abstract. Enabling multiple-purpose robots to follow textual instructions is an important challenge on the path to automating skill acquisition. In order to co.
Perception of Culture-specific Gaze Behaviors of Agents and Gender Effects
T Koda, Y Takeda – Proceedings of the 6th International Conference on …, 2018 – dl.acm.org
… [4] Lee, J., Marsella, S., Traum, D., Gratch, J., and Lance, B. The rickel gaze model: A window on the mind of a virtual human. In IVA2007, Springer, pp … A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception …
A Flexible Scheme to Model the Cognitive Influence on Emotions in Autonomous Agents
S Castellanos, LF Rodríguez – International Journal of Cognitive …, 2018 – igi-global.com
… with the ability to recognize emotions of humans and other virtual agents. For example, Alma is a CME designed to provide virtual humans with emotions … phase of the operating cycle of CMEs since the consistency of the results of other phases (e.g., emotion and behavior generation) depend on …
What Gaze Tells Us About Personality
T Ait Challal, O Grynszpan – … of the 6th International Conference on …, 2018 – dl.acm.org
… Twenty four participants were placed face-to-face with a virtual human, alternately female or male, which they had to address verbally … In fact, a computer program enabled the virtual human to react to the gaze of the participants that was detected via an eye-tracker …
Toward Modeling Emotional Crowds
L Lyu, J Zhang – IEEE Access, 2018 – ieeexplore.ieee.org
… III. OVERVIEW In this paper, we present an Emotion Network structure for crowd emotion modeling and emotion-oriented behavior generation. The architecture of the Emotion Network is inspired by social network theory [51] and intergroup emotion theory [52] …
Engagement recognition by a latent character model based on multimodal listener behaviors in spoken dialogue
K Inoue, D Lala, K Takanashi… – APSIPA Transactions on …, 2018 – cambridge.org
… Our work is novel, in that the differences in annotation form the basis of our engagement recognition model. C) Adaptive behavior generation according to engagement Some attempts were made to generate system behaviors after recognizing user engagement …
Interactive Motion Planning for Multi-Agent Systems with Physics-Based and Behavior Constraints
A Best – 2018 – search.proquest.com
Interactive Motion Planning for Multi-Agent Systems with Physics-Based and Behavior Constraints. Abstract. Man-made entities and humans rely on movement as an essential form of interaction with the world. Whether it is an …
Learning socio-communicative behaviors of a humanoid robot by demonstration
DC Nguyen – 2018 – hal.archives-ouvertes.fr
HAL Id: tel-01962544 (https://hal.archives-ouvertes.fr/tel-01962544), submitted on 20 Dec 2018.
Designing Joint Attention Systems for robots that assist children with autism spectrum disorders
L Fermoselle – 2018 – diva-portal.org
Degree project in Computer Science and Engineering, second cycle, 30 credits, Stockholm, Sweden, 2018: Designing Joint Attention Systems for robots that assist children with autism spectrum disorders, by Leonor Fermoselle.