Notes:
These results cover the use of virtual humans in applications such as education, entertainment, disaster response, training, and therapy. Recurring themes are the generation of nonverbal behavior to improve the quality of mediated interactions, behavior generation architectures that map high-level communicative functions onto low-level behaviors executable by robots, and behavior realizers originally built for virtual humans being applied to different robot platforms. Other topics include modeling perception in virtual environments, virtual humans in virtual learning environments, multimodal speech and gesture behavior, and the integration of muscle activity into interaction motion models.
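A recurring technical thread in the entries below is the rule-based mapping from high-level communicative functions to low-level, executable behaviors (e.g., Hemminghaus and Kopp's architecture, or Lee and Marsella's Nonverbal Behavior Generator). As a reading aid only, here is a minimal Python sketch of that general idea; it is not taken from any of the cited systems, and the class names, rule table, and timing values are illustrative assumptions.

```python
# Minimal sketch (not from any cited system): a rule-based mapping from
# high-level communicative functions to low-level, time-stamped behavior
# commands, in the spirit of NVBG-style generators discussed below.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Behavior:
    modality: str    # e.g. "head", "gaze", "gesture"
    action: str      # e.g. "nod", "look_at_user", "beat"
    start: float     # onset relative to utterance start, in seconds
    duration: float  # in seconds

# Hypothetical rule table: communicative function -> low-level behaviors.
RULES: Dict[str, List[Behavior]] = {
    "affirmation": [Behavior("head", "nod", start=0.0, duration=0.6)],
    "emphasis":    [Behavior("gesture", "beat", start=0.1, duration=0.5),
                    Behavior("head", "nod", start=0.1, duration=0.4)],
    "turn_taking": [Behavior("gaze", "look_at_user", start=0.0, duration=1.0)],
}

def generate_behaviors(functions: List[str]) -> List[Behavior]:
    """Map high-level functions onto a flat behavior plan, ordered by onset."""
    plan: List[Behavior] = []
    for f in functions:
        plan.extend(RULES.get(f, []))  # unknown functions simply produce nothing
    return sorted(plan, key=lambda b: b.start)

if __name__ == "__main__":
    for b in generate_behaviors(["affirmation", "emphasis"]):
        print(b)
```

In the cited systems, a plan of this kind is then handed to a behavior realizer (for example SmartBody or AsapRealizer, typically driven by BML markup) that schedules and animates the behaviors on the virtual human or robot.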
See also:
Towards adaptive social behavior generation for assistive robots using reinforcement learning
J Hemminghaus, S Kopp – 2017 12th ACM/IEEE International …, 2017 – ieeexplore.ieee.org
… We present a multimodal behavior generation architecture that maps high-level interactional functions and behaviors onto low-level behaviors executable by a robot …
A system for learning continuous human-robot interactions from human-human demonstrations
D Vogt, S Stepputtis, S Grehl, B Jung… – … on Robotics and …, 2017 – ieeexplore.ieee.org
A survey on the procedural generation of virtual worlds
J Freiknecht, W Effelsberg – Multimodal Technologies and Interaction, 2017 – mdpi.com
This survey presents algorithms for the automatic generation of content for virtual worlds, in particular for games. After a definition of the term procedural content generation, the algorithms to generate realistic objects such as landscapes and vegetation, road networks, buildings, living …
The TTS-driven affective embodied conversational agent EVA, based on a novel conversational-behavior generation algorithm
M Rojc, I Mlakar, Z Kačič – Engineering Applications of Artificial Intelligence, 2017 – Elsevier
… A similar strategy is used in systems designed for virtual human research, such as Greta (Pelachaud, 2015), Elckerlyc (Van Welbergen … The underlying gesture generation model (non-verbal behavior generator, NVBG) within these systems is mostly rule- or scenario-based …
Social eye gaze in human-robot interaction: a review
H Admoni, B Scassellati – Journal of Human-Robot Interaction, 2017 – dl.acm.org
… This article reviews the state of the art in social eye gaze for human-robot interaction (HRI) …
A novel unity-based realizer for the realization of conversational behavior on embodied conversational agents
I Mlakar, Z Kačič, M Borko, M Rojc – International Journal of …, 2017 – researchgate.net
… conversational agent EVA, based on a novel conversational-behavior generation algorithm. Engineering Applications of Artificial Intelligence, 57, 80-104. [16] Gratch, J., Hartholt, A., Dehghani, M., & Marsella, S. (2013). Virtual humans: a new toolkit for cognitive science research …
Pragmatic multimodality: Effects of nonverbal cues of focus and certainty in a virtual human
F Freigang, S Klett, S Kopp – International Conference on Intelligent Virtual …, 2017 – Springer
… Lee, J., Marsella, S.: Nonverbal behavior generator for embodied conversational agents, International Workshop on Intelligent Virtual … Bergmann, K., Kopp, S., Eyssel, F.: Individualized gesturing outperforms average gesturing: evaluating gesture production in virtual humans …
Adaptive emotional chatting behavior to increase the sociability of robots
I Rodriguez, JM Martínez-Otzeta, E Lazkano… – … Conference on Social …, 2017 – Springer
… Lee, J., Marsella, S.: Nonverbal behavior generator for embodied conversational agents … Lhommet, M., Xu, Y., Marsella, S.: Cerebella: automatic generation of nonverbal behavior for virtual humans …
Here’s Looking At You Anyway!: How Important is Realistic Gaze Behavior in Co-located Social Virtual Reality Games?
S Seele, S Misztal, H Buhler, R Herpers… – Proceedings of the …, 2017 – dl.acm.org
… Simulating eye movements for virtual humans or avatars can improve social experiences in virtual reality (VR) games, especially when wearing head-mounted displays … Two examples of gaze models, well known in the virtual humans community, are Lee et al …
Developing Virtual Patients with VR/AR for a natural user interface in medical teaching
MA Zielke, D Zakhidov, G Hardee… – 2017 IEEE 5th …, 2017 – ieeexplore.ieee.org
… In the development of SimCoach and the SimSensei Kiosk, ICT utilizes a Virtual Human Toolkit (VHTK) that contains a framework that … text classification algorithm that selects the character’s responses based on the user’s utterances; the Nonverbal Behavior Generator (NVBG) a …
Listen to My Body: Does Making Friends Help Influence People?
R Artstein, D Traum, J Boberg, A Gainer… – The Thirtieth …, 2017 – aaai.org
… Julie’s voice was synthesized using a voice from NEOspeech’s text-to-speech engine. Her behaviors were generated with synchronized speech and gesture using Smartbody, the Nonverbal Behavior Generator, and TTSRelay, all components of the Virtual Human Toolkit …
You Can Leave Your Head On
J Linssen, M Berkhoff, M Bode, E Rens… – … on Intelligent Virtual …, 2017 – Springer
… strong conclusions, our studies suggest that the robot head can be used successfully to complement the behaviour of the virtual human, if proper … M., Mutlu, B., McDonnell, R.: A review of eye gaze in virtual agents, social robotics and HCI: Behaviour generation, user interaction …
EmotionML
F Burkhardt, C Pelachaud, BW Schuller… – Multimodal interaction with …, 2017 – Springer
… For example, as part of their Virtual Human Markup Language, the Curtin University of Technology introduced, in the Interface project, EML (Emotion Markup Language), a vocabulary used to control the emotional … APML, a markup language for believable behavior generation …
Selecting and expressing communicative functions in a saiba-compliant agent framework
A Cafaro, M Bruijnes, J van Waterschoot… – … on Intelligent Virtual …, 2017 – Springer
… This is typically the joint task of a Dialogue Manager (DM) on the one hand (intent planner) and a non-verbal behaviour generation (NVBG) system on the other (behaviour planner) … The Virtual Human Toolkit uses question answering algorithms to select the agent’s response …
R3D3 in the Wild: Using A Robot for Turn Management in Multi-Party Interaction with a Virtual Human
M Theune, D Wiltenburg, M Bode… – IVA Workshop on …, 2017 – research.utwente.nl
… Wiltenburg, “You can leave your head on – attention management and turn-taking in multi-party interaction with a virtual human/robot duo … Gleicher, B. Mutlu, and R. McDonnell, “A review of eye gaze in virtual agents, social robotics and HCI: Behaviour generation, user interaction …
Computational approaches to dialogue
D Traum – 2017 – books.google.com
… These systems are also called virtual humans (Gratch et al. 2002) … DeVault, D., Georgila, K., Artstein, R., Morbini, F., Traum, D., Scherer, S., Rizzo, AS & Morency, L.-P.(2013), Verbal indicators of psychological distress in interactive dialogue with a virtual human …
Using facially expressive robots to calibrate clinical pain perception
M Moosaei, SK Das, DO Popa, LD Riek – Proceedings of the 2017 ACM …, 2017 – dl.acm.org
A multifaceted study on eye contact based speaker identification in three-party conversations
Y Ding, Y Zhang, M Xiao, Z Deng – … of the 2017 CHI Conference on …, 2017 – dl.acm.org
… To improve the quality of mediated interaction, tremendous progress has been made in synthesizing nonverbal behaviors for virtual humans as the speaker or a listener in a conversation. Often, the synthesized …
Towards interactive agents that infer emotions from voice and context information
D Formolo, T Bosse – 2017 – repository.ubn.ru.nl
… Furthermore, the virtual human SimSensei Kiosk uses voice, speech and other features to analyse user emotions in the context of healthcare … the Context Awareness module simply does not send any new information to the next module (the Behaviour Generation module) …
An Incremental Response Policy in an Automatic Word-Game.
E Pincus, DR Traum – WCIHAI@ IVA, 2017 – people.ict.usc.edu
… to the system’s dialogue manager, which can send response behavior messages back to different components of the Virtual Human Toolkit based … All together now. Intelligent Virtual Agents (IVA) (2013). Lee, Jina and Marsella, Stacy: Nonverbal behavior generator for embodied …
Affect and Believability in Game Characters–A Review of The Use of Affective Computing in Games
S Hamdy, D King – GameOn’17–The Simulation and AI in Games …, 2017 – rke.abertay.ac.uk
… The Virtual Human Toolkit (Hartholt et al. 2013) is a collection of modules, tools, and libraries, integrated into a framework and open architecture for creating ECAs … It includes sensing, interpretation, behaviour generation, and game components …
A robotic couples counselor for promoting positive communication
D Utami, TW Bickmore, LJ Kruger – 2017 26th IEEE …, 2017 – ieeexplore.ieee.org
… In behavior generation, Mutlu et al … 1) Feelings of no-judgment: Consistent with previous research on increased self-disclosure with virtual humans [Gale, Gratch, 2014], participants expressed their comfort in sharing their thoughts and feelings with the robot, because “robots don …
Affect Analysis using Multimodal Cues
S Bhatia – 2017 – fg2017.org
… sharing personal information with the virtual human and whether it would be sensitive to their non-verbal behaviour. The three-stage design of SimSensei led to key functionalities such as dialogue processing, multimodal perception and non-verbal behaviour generation …
The GIFT 2017 Architecture Report
K Brawner, Z Heylmun, M Hoffman – Proceedings of the 5th …, 2017 – books.google.com
… Additional services, such as those provided from the Virtual Human Toolkit (Hartholt et al., 2013) are planned to be added via this approach within the next 12 months. Services include high-quality animation of agents, non-verbal behavior generation, natural language …
Body Movements Generation for Virtual Characters and Social Robots
A Beck, Z Yumak… – Social Signal …, 2017 – books.google.com
… For example, multiparty interaction is an active research topic (Yumak et al., 2014). State-of-the-art virtual humans and social robots are not yet able to display this kind of flexibility … These challenges need to be addressed not at the movement behaviour generation level alone …
Challenges in Synchronized Behavior Realization for Different Robotic Embodiments
I de Kok, J Hemminghaus, S Kopp – 2017 – pub.uni-bielefeld.de
… Here, we present work using a behavior realizer that was built for virtual humans and apply it to two robot platforms with different … Keywords: Multimodal Behavior Generation, Continuous Interaction, Human-Robot Interaction, Social Robotics, AsapRealizer, BML …
Automatic question generation for virtual humans
EL Fasya – 2017 – essay.utwente.nl
… of spoken dialogue systems because it involves more modules such as nonverbal behavior understanding and nonverbal behavior generation … Figure 2.3 shows the common architecture of a virtual human [7]. The architecture is …
Integration of Multi-modal Cues in Synthetic Attention Processes to Drive Virtual Agent Behavior
S Seele, T Haubrich, T Metzler, J Schild… – … on Intelligent Virtual …, 2017 – Springer
… Virtual human researchers are not constrained by considerations present in game development and are able to explore more realistic perception … Our current work focuses on these aspects of agent behavior generation as the basis for the subsequent action selection process …
Augmenting Cognitive Processes and Behavior of Intelligent Virtual Agents by Modeling Synthetic Perception
S Seele, T Haubrich, J Schild, R Herpers… – Proceedings of the on …, 2017 – dl.acm.org
… sensory input data. The main purpose of our work is to model the ability as well as the limitations of perception in a virtual environment and integrate the findings into a behavior generation system for virtual humans. In the work …
Generation of communicative intentions for virtual agents in an intelligent virtual environment: application to virtual learning environment
B Nakhal – 2017 – tel.archives-ouvertes.fr
… ICT Institute for Creative Technologies VH Virtual Human MARC Multimodal Affective and Reactive Character FML Function Markup Language … MURML Multimodal Utterance Representation Markup Language VHToolkit Virtual Human Toolkit UDP User Datagram Protocol …
Conversational Agent Learning Natural Gaze and Motion of Multi-Party Conversation from Example
S Zou, K Kuzushima, H Mitake… – Proceedings of the 5th …, 2017 – dl.acm.org
… The proposed method consists of 3 parts: conversation example capture, HMM learning, interactive behavior generation. Figure 1 shows the overview … 2014. SimSensei Kiosk: A Virtual Human Interviewer for Healthcare Decision Support …
Computational gesture research
S Kopp – Why Gesture?: How the hands function in speaking …, 2017 – books.google.com
… & Eyssel, Friederike 2010. “Individualized gesturing outperforms average gesturing – evaluating gesture production in virtual humans.” In Proceedings of … Lee, Jina, & Marsella, Stacy 2006. “Nonverbal behavior generator for embodied conversational agents.” In Proceedings of …
Affect and believability in game characters: a review of the use of affective computing in games
S ElSayed, DJ King – GAME-ON’2017, 18th annual Conference on …, 2017 – rke.abertay.ac.uk
… The Virtual Human Toolkit (Hartholt et al. 2013) is a collection of modules, tools, and libraries, integrated into a framework and open architecture for creating ECAs … It includes sensing, interpretation, behaviour generation, and game components …
Gaze-Related Eye and Head Motion for Virtual Agents
J Krejsa – is.muni.cz
… Bachelor’s thesis, Masaryk University, Faculty of Informatics, Brno, Spring 2017 …
Territoriality and Visible Social Commitment for Virtual Agents
J Rossi, L Veroli, M Massetti – skemman.is
… Master’s thesis, Università degli Studi di Camerino, Scuola di Scienze e Tecnologie. Supervisors: Prof. Emanuela Merelli, Prof. …
Gaze and filled pause detection for smooth human-robot conversations
M Bilac, M Chamoux, A Lim – 2017 IEEE-RAS 17th International …, 2017 – ieeexplore.ieee.org
… 2014. SimSensei Kiosk: A virtual human interviewer for healthcare decision support. In Proceedings of the 2014 International Conference on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 1061–1068 …
Knowledge Representation for Clinical Guidelines: with applications to Multimorbidity Analysis and Literature Search
V Carretta Zamborlini – 2017 – dare.ubvu.vu.nl
Using Cognitive Models
S Kopp, K Bergmann – The Handbook of Multimodal-Multisensor …, 2017 – books.google.com
… For example, multimodal speech-gesture behavior has proven to be beneficial for virtual humans and socially expressive robots [Kopp 2017] … 2001], for instance, was based on behavior generators in which generation rules extracted from empirical data were implemented …
Computational Approaches to Dialogue
D Traum – The Routledge Handbook of Language and Dialogue, 2017 – taylorfrancis.com
… These systems are also called virtual humans (Gratch et al. 2002) … DeVault, D., Georgila, K., Artstein, R., Morbini, F., Traum, D., Scherer, S., Rizzo, AS & Morency, L.-P. (2013), Verbal indicators of psychological distress in interactive dialogue with a virtual human …
Cognitive modulation of appraisal variables in the emotion process of autonomous agents
S Castellanos, LF Rodríguez… – 2017 IEEE 16th …, 2017 – ieeexplore.ieee.org
… of the operating cycle of CMEs, since the consistency of the results of other phases (e.g., emotion and behavior generation) depends on the … [24] C. Becker-Asano and I. Wachsmuth, “Affective computing with primary and secondary emotions in a virtual human,” Autonomous Agents …
Collaboration behavior enhancement in co-development networks
M Shadi – pure.uva.nl
… UvA-DARE (Digital Academic Repository), University of Amsterdam …
Computational study of primitive emotional contagion in dyadic interactions
G Varni, I Hupont, C Clavel… – IEEE Transactions on …, 2017 – ieeexplore.ieee.org
An empirical study on evaluating basic characteristics and adaptability to users of a preventive care system with learning communication robots
D Kitakoshi, T Okano, M Suzuki – Soft computing, 2017 – Springer
… The latter work addressed the effect of the expression of moral emotions (such as gratitude, distress or remorse) on the interaction between people and embodied agents (virtual humans). Using computer simulations of the iterated …
Learning human-robot collaboration insights through the integration of muscle activity in interaction motion models
L Chen, H Wu, S Duan, Y Guan… – 2017 IEEE-RAS 17th …, 2017 – ieeexplore.ieee.org
Agents with Affective Traits for Decision-Making in Complex Environments
B Alfonso Espinosa – 2017 – riunet.upv.es
… agents with affective characteristics, for example education, entertainment, disaster situations, training, and therapies, which have been improved through simulations with virtual humans. Also, the …
Realizing the Agent Interface
C Draude – Computing Bodies, 2017 – Springer
… The latter two concepts, in particular, are of interest for the embodiment of the Virtual Human … It was the same dynamic that shaped the relationship of speech and writing.” The Virtual Human/embodied agent is a computational artifact …