Notes:
Behavior generation in virtual humans refers to the creation of lifelike behaviors and actions for virtual characters, such as those used in video games, animation, or virtual environments. It covers a wide range of behaviors, such as walking, talking, or gesturing, produced in a way that is believable to an observer. To generate these behaviors, designers and developers use a combination of animation techniques, motion capture data, and artificial intelligence algorithms. The goal is to create characters that can interact with users and their environment in a natural and realistic way.
The references below discuss the use of virtual humans in various applications, including training, games, and interactions with real humans. They also address the challenges of generating realistic and adaptable behavior, drawing on technologies such as speech recognition, natural language processing, and nonverbal behavior generation. Several works use domain-independent planners and authoring tools to support the creation and control of virtual human behavior; others use virtual humans to provide prompts or assistance to people with cognitive disabilities, integrate the affordance concept into behavior models, or apply interaction meshes to generate natural behaviors for virtual characters during ongoing user interactions.
Domain-independent planners are algorithms that can be used to generate behaviors for virtual humans in a variety of different contexts or domains. These planners are designed to be flexible and adaptable, and can be used to generate behaviors that are appropriate for different situations or environments. They work by taking in information about the current context, such as the virtual human’s goals, the available resources, and the constraints of the environment, and using this information to generate a plan or sequence of actions that the virtual human can follow. This allows the virtual human to adapt its behavior in a more flexible and intelligent way, rather than being limited to a fixed set of behaviors that are pre-determined by the designer. Domain-independent planners are often used in virtual human research as a way to automate the process of behavior generation, while still allowing for a high degree of control and customization.
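The planning loop described above can be sketched as a minimal STRIPS-style forward search. This is a generic illustration, not any cited system's planner; the domain, facts, and action names ("walk_to_door", "open_door", "exit_room") are hypothetical.

```python
from collections import deque

class Action:
    """A STRIPS-style action: preconditions, add effects, delete effects."""
    def __init__(self, name, preconditions, add, delete):
        self.name = name
        self.pre = frozenset(preconditions)   # facts that must hold to apply
        self.add = frozenset(add)             # facts made true by the action
        self.delete = frozenset(delete)       # facts made false by the action

def plan(initial, goal, actions):
    """Breadth-first search from the initial state to any state that
    contains all goal facts; returns the action sequence, or None."""
    start, goal = frozenset(initial), frozenset(goal)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for a in actions:
            if a.pre <= state:
                nxt = (state - a.delete) | a.add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [a.name]))
    return None

# Hypothetical virtual-human domain: the character must leave the room.
actions = [
    Action("walk_to_door", {"at_desk"}, {"at_door"}, {"at_desk"}),
    Action("open_door", {"at_door"}, {"door_open"}, set()),
    Action("exit_room", {"at_door", "door_open"}, {"outside"}, {"at_door"}),
]
print(plan({"at_desk"}, {"outside"}, actions))
# → ['walk_to_door', 'open_door', 'exit_room']
```

Because the search only consults preconditions and effects, the same `plan` function works unchanged in any domain the designer can describe as facts and actions, which is what makes such planners domain-independent.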
Interaction meshes are a method for generating natural behaviors for virtual characters during ongoing, close-contact interactions. An interaction mesh is a volumetric mesh whose vertices are points on the interacting bodies, such as the joints of the virtual character and of the user or a partner character, connected by edges that encode their spatial relationships. During animation, the deformation of this mesh is kept as small as possible (typically by preserving its Laplacian coordinates) so that the spatial relationships of the original interaction are maintained while the character adapts to the partner’s current pose. This allows the virtual character to adjust its behavior in a flexible and natural way as the user moves. Interaction meshes are used in the development of interactive virtual humans, including context-dependent variants for behavior generation during ongoing user interactions.
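A minimal sketch of the interaction-mesh idea: spatial relationships between two characters are encoded as Laplacian coordinates of a mesh built over their joint positions, and deviation from those coordinates measures how much an edited motion violates the original interaction. The joint layout and edge list below are hypothetical, and real systems minimize this residual subject to constraints rather than merely measuring it.

```python
import numpy as np

def neighbours(i, edges):
    """Mesh vertices sharing an edge with vertex i."""
    return [b if a == i else a for a, b in edges if i in (a, b)]

def laplacian_coords(points, edges):
    """Each vertex minus the centroid of its mesh neighbours; encodes
    local spatial structure independently of absolute position."""
    return np.array([p - np.mean([points[j] for j in neighbours(i, edges)], axis=0)
                     for i, p in enumerate(points)])

# Two characters, two joints each (e.g. hand positions), in 2D for brevity.
points = np.array([[0.0, 0.0], [1.0, 0.0],   # character A
                   [0.0, 1.0], [1.0, 1.0]])  # character B
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]     # mesh linking both bodies

lap = laplacian_coords(points, edges)

# Rigidly translating the whole mesh leaves the Laplacian coordinates
# unchanged: the encoded relationship is translation-invariant.
shifted = points + np.array([5.0, -2.0])
assert np.allclose(laplacian_coords(shifted, edges), lap)

# Moving only character B distorts them; the residual is the quantity a
# real interaction-mesh system would minimize when adapting the motion.
moved = points.copy()
moved[2:] += np.array([0.0, 3.0])
residual = np.linalg.norm(laplacian_coords(moved, edges) - lap)
print(round(residual, 2))
# → 3.0
```

The design point is that the mesh ties the two bodies together: editing one character’s motion immediately registers as deformation, so preserving the mesh shape preserves the interaction.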
See also:
The benefits of virtual humans for teaching negotiation
J Gratch, D DeVault, G Lucas – International Conference on Intelligent …, 2016 – Springer
… LNCS, vol. 8108, pp. 368–381. Springer, Heidelberg (2013). Lee, J., Marsella, S.C.: Nonverbal behavior generator for embodied conversational agents … DeVault, D., Mell, J., Gratch, J.: Toward natural turn-taking in a virtual human negotiation agent …
Automatic agent generation for IoT-based smart house simulator
W Lee, S Cho, P Chu, H Vu, S Helal, W Song… – Neurocomputing, 2016 – Elsevier
… Table 3. Motivation selection and behavior generation. Time, Prior motivation, Action. 01:15:06, Sleepiness (low value), Climb on the bed … Notes Comput. Sci., 2969 (2004), pp. 55-67. [25] D. Sevin, D. Thalmann, A motivational model of action selection for virtual humans. Comput. Gr …
Graphical models for social behavior modeling in face-to face interaction
A Mihoub, G Bailly, C Wolf, F Elisei – Pattern Recognition Letters, 2016 – Elsevier
… HSMMs). We outperform the baseline in both measures of performance, i.e. interaction unit recognition and behavior generation … behavior. As a result, DBN leads to better performances in both IU recognition and behavior generation …
INGREDIBLE: A platform for full body interaction between human and virtual agent that improves co-presence
E Bevacqua, R Richard, J Soler, P De Loor – Proceedings of the 3rd …, 2016 – dl.acm.org
… computed. Once the database of reference gestures and the discriminant features are determined, the on-line process can run during the interaction to recognize the (real or virtual) human’s gestures and expressivity in real time …
Context Aware Human-Robot and Human-Agent Interaction
N Magnenat-Thalmann, J Yuan, D Thalmann, BJ You – 2016 – Springer
… Preface: This book is the first book to describe how Autonomous Virtual Humans and Social Robots can interact with real people, be aware of these people and the environment and react to various situations …
A Virtual Emotional Freedom Therapy Practitioner
H Ranjbartabar, D Richards – … of the 2016 International Conference on …, 2016 – dl.acm.org
… Our virtual EFT practitioner, known as EFFIE the Emotional Freedom FrIEnd, is based on Virtual Human Toolkit components [6 … The dialogue engine sends BML (Behavior Markup Language) message to NVBG (NonVerbal-Behavior-Generator) module containing the line the …
Socially-aware animated intelligent personal assistant agent
Y Matsuyama, A Bhardwaj, R Zhao, O Romeo… – Proceedings of the 17th …, 2016 – aclweb.org
… is sent to BEAT, a non-verbal behavior generator (Cassell et al., 2004), which tailors a behavior plan (including relevant hand gestures, eye gaze, head nods, etc.) and outputs the plan as BML (Behavior Markup Language), which is a part of the Virtual Human Toolkit (Hartholt et …
Semi-situated learning of verbal and nonverbal content for repeated human-robot interaction
I Leite, A Pereira, A Funkhouser, B Li… – Proceedings of the 18th …, 2016 – dl.acm.org
… Keywords Long-term human-robot interaction; crowdsourcing; content authoring; multimodal behavior generation … They developed an authoring tool that facilitates collaborative development of virtual humans by two groups of end-users: domain experts (educators) and domain …
Learning human-robot interactions from human-human demonstrations (with applications in lego rocket assembly)
D Vogt, S Stepputtis, R Weinhold… – 2016 IEEE-RAS 16th …, 2016 – ieeexplore.ieee.org
… IEEE, nov 2013, pp. 3257–3264. [3] D. Vogt, B. Lorenz, S. Grehl, and B. Jung, “Behavior generation for interactive virtual humans using context-dependent interaction meshes and automated constraint extraction,” Computer Animation and Virtual Worlds, vol. 26, no. 3-4, pp …
Aliveness metaphor for an evolutive gesture interaction based on coupling between a human and a virtual agent
P De Loor, R Richard, J Soler, E Bevacqua – Proceedings of the 29th …, 2016 – dl.acm.org
… [6] S. Kopp, J. Allwood, K. Grammer, E. Ahlsen, and T. Stocksmeier. Modeling embodied feedback with virtual humans. In Modeling Communication with Robots and Virtual Humans, volume 4930 of LNCS, pages 18–37, 2008. [7] L.-P. Morency, I. de Kook, and J. Gratch …
Ask Alice: an artificial retrieval of information agent
M Valstar, T Baur, A Cafaro, A Ghitulescu… – Proceedings of the 18th …, 2016 – dl.acm.org
… speech and visual appearance of the virtual human. The ARIA Framework makes use of communication and representation standards wherever possible. For example, by adhering to FML and BML we are able to plug in two different visual behaviour generators, Greta [4] or …
Simulink Toolbox for Real-time Virtual Character Control
U Bernardet, M Saberi, S DiPaola – International Conference on Intelligent …, 2016 – Springer
… We are motivated by the experience that control systems for virtual humans tend to become complex rapidly and that graphical tools are very useful in supporting … The blocks in the toolbox fall into the broad categories of “input/sources”, utilities, and output behavior generation …
Breaking bad behaviors: A new tool for learning classroom management using virtual reality
JL Lugrin, ME Latoschik, M Habel, D Roth, C Seufert… – Frontiers in …, 2016 – frontiersin.org
This article presents an immersive Virtual Reality (VR) system for training classroom management skills, with a specific focus on learning to manage disruptive student behaviour in face-to-face, one-to-many teaching scenarios. The core of the system is a real-time 3D virtual simulation …
The design of virtual audiences: noticeable and recognizable behavioral styles
N Kang, WP Brinkman, MB van Riemsdijk… – Computers in Human …, 2016 – Elsevier
… When individuals are exposed to a virtual environment and perform in front of a group of virtual humans, their belief, anxiety, and performance can be affected … Besides the behavior as individual virtual humans, audience members also respond to each other’s behavior …
Conversational interfaces: devices, wearables, virtual agents, and robots
M McTear, Z Callejas, D Griol – The Conversational Interface, 2016 – Springer
… ICT) Virtual Human Toolkit is a collection of modules, tools, and libraries designed to aid and support researchers and developers with the creation of ECAs. It provides modules for multimodal sensing, character editing and animation, and nonverbal behavior generation …
Timed Petri nets for fluent turn-taking over multimodal interaction resources in human-robot collaboration
C Chao, A Thomaz – The International Journal of Robotics …, 2016 – journals.sagepub.com
The goal of this work is to develop computational models of social intelligence that enable robots to work side by side with humans, solving problems and achiev…
Towards Truly Autonomous Synthetic Characters with the Sigma Cognitive Architecture
V Ustun, PS Rosenbloom – Integrating Cognitive Architectures into …, 2016 – igi-global.com
… However, progress in behavior generation has been more mixed … in their environment based on what they know and perceive, e.g. reacting and appropriately responding to the events around them; (3) interact in a natural way with both real and other virtual humans using verbal …
Modeling grounding for interactive social companions
G Mehlmann, K Janowski, E André – KI-Künstliche Intelligenz, 2016 – Springer
… The stepwise execution of the interaction model enables the precise alignment and incremental interleaving of input processing and behavior generation … Lee J, Marsella S, Traum D, Gratch J, Lance B (2007) The rickel gaze model: a window on the mind of a virtual human …
Multimodal and multi-party social interactions
Z Yumak, N Magnenat-Thalmann – Context Aware Human-Robot and …, 2016 – Springer
… realization of the planned behaviors at the level of motor controls for the robots and using computer animation techniques for virtual humans. In this chapter, we are mainly interested in steps 2, 3, and 4, which stays between low-level sensing and behavior generation and deals …
Modelling multi-issue bargaining dialogues: Data collection, annotation design and corpus
V Petukhova, C Stevens, H de Weerd… – Proceedings of the …, 2016 – aclweb.org
Modelling Multi-Issue Bargaining Dialogues: Data Collection, Annotation Design and Corpus. Volha Petukhova, Christopher Stevens, Harmen de Weerd, Niels Taatgen, Fokie Cnossen, Andrei Malchanau. Saarland …
Hand gesture synthesis for conversational characters
M Neff – Handbook of Human Motion, 2016 – Springer
… Duckworth, London. Lee, J., Marsella, S. (2006) Nonverbal behavior generator for embodied conversational agents … SC, pp 151–158. Van Welbergen, H., Reidsma, D., Ruttkay, Z., Zwiers, J. (2010) Elckerlyc: a BML realizer for continuous, multimodal interaction with a virtual human …
The Effects of Interpersonal Attitude of a Group of Agents on User’s Presence and Proxemics Behavior
A Cafaro, B Ravenet, M Ochs… – ACM Transactions on …, 2016 – dl.acm.org
… However, only a few have dealt with autonomous behavior generation, and in those cases, the agents’ exhibited behavior should be … INTRODUCTION Many real-time interactive simulations involve the use of large open worlds populated by autonomous virtual humans [Ennis …
Design and Study of Emotions in Virtual Humans for Assistive Technologies
A Malhotra – 2016 – uwspace.uwaterloo.ca
… Abstract: This thesis presents the design and study of emotionally aligned prompts given by virtual humans for persons with cognitive disabilities such as Alzheimer’s disease and related dementias (ADRD) …
Authoring directed gaze for full-body motion capture
T Pejsa, D Rakita, B Mutlu, M Gleicher – ACM Transactions on Graphics …, 2016 – dl.acm.org
ACM Reference Format: Pejsa, T., Rakita, D., Mutlu, B., Gleicher, M. 2016. Authoring Directed Gaze for Full-Body Motion Capture. ACM Trans. Graph. 35, 6, Article 161 (November 2016), 11 pages. DOI = 10.1145/2980179.2982444 …
CANVAS: computer-assisted narrative animation synthesis.
M Kapadia, S Frey, A Shoulson… – Symposium on …, 2016 – ashoulson.com
… These methods represent tradeoffs between user-driven specification and automatic behavior generation. The work of Kwon et al … Automation. Total-order planners [FN71, HNR72] are promising for automated behavior generation [FTT99, KSRF11, SGKB13, RB13] …
Emotional appraisal engines for games
J Broekens, E Hudlicka, R Bidarra – Emotion in Games, 2016 – Springer
… Abstract. Affective game engines could support game development by providing specialized emotion sensing (detection), emotion modeling, emotion expression and affective behavior generation explicitly tailored towards games …
Generating robot gaze on the basis of participation roles and dominance estimation in multiparty interaction
YI Nakano, T Yoshino, M Yatsushiro… – ACM Transactions on …, 2016 – dl.acm.org
… Then we applied our findings to human- robot interaction. To design robot gaze behaviors, we analyzed gaze transitions with respect to participation roles and dominance and implemented gaze-transition models as robot gaze behavior generation rules …
project SENSE–Multimodal Simulation with Full-Body Real-Time Verbal and Nonverbal Interactions
H Miri, J Kolkmeier, PJ Taylor, R Poppe… – … Conference on Intelligent …, 2016 – Springer
… The Virtual Human Toolkit (VHTK) is a collection of modules, tools, and libraries, to allow for the creation … visual sensing, nonverbal behavior understanding, speech recognition, natural language processing, dialogue management, nonverbal behavior generation and realization …
A conversational agent that reacts to vocal signals
D Formolo, T Bosse – International Conference on Intelligent Technologies …, 2016 – Springer
… Furthermore, the virtual human SimSensei Kiosk uses voice, speech and other features to analyse user emotions in the context of healthcare decision support … Next, this output is provided to the Behaviour Generation Module, which generates an appropriate response to the user …
Virtual human technologies for cognitively-impaired older adults’ care: the LOUISE and Virtual Promenade experiments
P Wargnier – 2016 – pastel.archives-ouvertes.fr
Designing Emotionally Expressive Behaviour: Intelligibility and Predictability in Human-Robot Interaction
J Novikova – 2016 – researchportal.bath.ac.uk
PhD thesis, University of Bath. Award date: 2016. Awarding institution: University of Bath …
A Computational Framework for Expressive, Personality-based, Non-verbal Behaviour for Affective 3D Character Agents
M Saberi – 2016 – summit.sfu.ca
… by Maryam Saberi. B.Sc., Najafabad Azad University, Iran, 2003; M.A., Chalmers University of Technology, Sweden, 2009 …
ACUMEN: Activity-centric crowd authoring using influence maps
A Krontiris, KE Bekris, M Kapadia – Proceedings of the 29th International …, 2016 – dl.acm.org
… is difficult to author. The use of domain-independent planners [23] is a promising direction for automated behavior generation, which provide automation while sacrificing authoring control. Influence Maps. The concept has …
Providing Physical Appearance and Behaviour to Virtual Characters
M del Puy Carretero, HV Diez, S García… – … on Articulated Motion …, 2016 – Springer
… D., Ruttkay, Z.M., Zwiers, J.: Elckerlyc: a BML realizer for continuous, multimodal interaction with a virtual human. J. Multimodal User Interfaces 3(4), 271–284 (2010). ISSN: 1783-7677. Lee, J., Marsella, S.C.: Nonverbal behavior generator for embodied …
Using Virtual Humans to Understand Real Ones
K Hoemann, B Rezaei, SC Marsella… – arXiv preprint arXiv …, 2016 – arxiv.org
Page 1. Using Virtual Humans to Understand Real Ones … Qualifying considerations as well as possible next steps are further discussed in light of these exploratory findings. Index Terms—Non-Verbal Behaviour, Virtual Human, Gaze Pattern, Gesture, Facial Expression …
Things that Make Robots Go HMMM: Heterogeneous Multilevel Multimodal Mixing to Realise Fluent, Multiparty, Human-Robot Interaction
NCEDD Reidsma – researchgate.net
… A. Fluent Behaviour Generation. Generating behaviour that can adapt fluently to external influences introduces several requirements for our system architecture … control engine (2), which executes the actual behaviour on an agent embodiment (for instance, a virtual human or a …
An event-centric approach to authoring stories in crowds
M Kapadia, A Shoulson, C Steimer… – Proceedings of the 9th …, 2016 – dl.acm.org
… Being that our virtual worlds consist of large numbers of virtual humans and objects, how do we encapsulate our crowds to make them available to an … These methods represent different tradeoffs between the ease of user specification and the autonomy of behavior generation …
D4. 7 1st Expressive Virtual Characters
F Yang, C Peters – 2016 – prosociallearn.eu
… The ICT Virtual Human Toolkit is a collection of modules, tools, and libraries designed to aid and support researchers and developers with the creation of virtual human conversational characters, developed at the University of Southern California (USC) …
A virtual emotional freedom practitioner to deliver physical and emotional therapy
H Ranjbartabar – 2016 – researchonline.mq.edu.au
… NLP Neuro-Linguistic Programming NLU Natural Language Understanding NVBG NonVerbal Behaviour Generator PTSD Post Trauma Stress Disorder TFT Thought Field Therapy VH Virtual Human VP Virtual Patient VRET Virtual Reality Exposure Therapy VT Virtual Therapist …
Using the affordance concept for model design in agent-based simulation
F Klügl – Annals of Mathematics and Artificial Intelligence, 2016 – Springer
… There is no “standard” way for integrating the abstract affordance concept into the agent’s behavior generation: [31] integrates affordances into a BDI architecture, [36] use affordances for capturing scenario specific information in PMFServ, [21] create options that afford driving …
Modeling and representing dramatic situations as paradoxical structures
N Szilas – Digital Scholarship in the Humanities, 2016 – academic.oup.com
Abstract. The concept of dramatic situation is important in dramaturgy and narratology. In the domain of story generation and interactive digital storytelling,.
Westminster Serious Games Platform (wmin-SGP) a tool for real-time authoring of roleplay simulations for learning
D Economou, I Doumanis, F Pedersen… – … on Future Intelligent …, 2016 – clok.uclan.ac.uk
… 3. The Non-verbal Behaviour Generator (NVBG), a rule-based component that analyses a VH response and proposes appropriate non-verbal … 4. The Speech Components (SC), a Text-to-Speech (TTS) component generates the virtual human’s speech and the timing needed …
Agent development as a strategy shaper
A Elmalech, D Sarne, N Agmon – Autonomous Agents and Multi-Agent …, 2016 – Springer
… strategy of their developers. The effectiveness of using PDAs as human behavior generators depends primarily on the similarity between the behaviors exhibited by PDAs and their developers. Nevertheless, despite the great …
A study of the use of Natural Language Processing for Conversational Agents
RS Wilkens – 2016 – lume.ufrgs.br
… Figura 1.1 General architecture of a conversational agent….. 14 Figura 2.1 The virtual human system architecture … In this sense there are virtual humans whose goal is an interaction similar to human interaction …
Dialogue systems and dialogue management
D Burgan – 2016 – apps.dtic.mil
UNCLASSIFIED. Dialogue Systems & Dialogue Management. Deeno Burgan, National Security & ISR Division, Defence Science and Technology Group, DST-Group-TR-3331. Abstract: A spoken dialogue …
Design a simulated multimedia enriched immersive learning environment (SMILE) for nursing care of dementia patient
S Roomkham – 2016 – eprints.qut.edu.au
… Figure 4.6 The Virtual Human Activity Diagram ….. 70 Figure 4.7 The Activity Diagram of Keyword Extraction ….. 71 Figure 4.8 Screen shots of SMILE Framework in Clinical Situation. …. 74 …
Capturing and Animating Hand and Finger Motion for 3D Communicative Characters
NS Wheatland – 2016 – escholarship.org
… characters. Though many methods have been developed over the years to facilitate aspects of 3D character animation, creating realistic virtual humans is still a challenge. This is partly because of the way virtual characters move …
Designing and Testing an Experimental Framework of Affective Intelligent Agents in Healthcare Training Simulations
M Loizou – 2016 – wlv.openrepository.com
… 9 Figure 2.3 – The PEFiC model (Van Vugt, Hoorn et al. 2004) … 17 Figure 2.4 – Agent design for the Virtual Humans project (Kenny, Hartholt et al. 2007) …. 20 Figure 2.5 – Goal hierarchy of an explainable BDI agent …
Gestures become more informative when communication is unsuccessful
MW Hoetjes – 2016 – repository.ubn.ru.nl
… Jennifer Gerwing. Physicians’ and patients’ use of body-oriented gestures in primary care consultations Jacqueline Hemminghaus and Stefan Kopp. Adaptive Behavior Generation for Multimodal Child- Robot-Interaction Marieke Hoetjes …