Notes:
These results center on the Virtual Human Toolkit (VHT), a collection of modules, tools, and libraries for creating virtual human conversational assistants, typically used with the Unity game engine to create, animate, and control embodied conversational agents. The surrounding literature spans human-machine interaction, virtual environments, and multimodal-multisensor systems, including their use in commercial products and the privacy risks such systems introduce. Recurring themes include accounting for individual characteristics and effects on cognitive performance when designing multimodal systems, applying virtual humans in training scenarios such as conflict mediation and social skills training, and using virtual humans to study social factors in organizational decision making.
The Virtual Human Toolkit (VHT) is a collection of modules, tools, and libraries designed to help researchers and developers create virtual human conversational characters for use in virtual environments. Used in conjunction with the Unity game engine, it can generate high-fidelity embodied agents and integrate them into virtual environments. Virtual humans built with the VHT support a wide range of behaviors and abilities, including speaking, gesturing, and moving around a virtual environment; the toolkit also provides control over their behavior and over their interactions with human users. Developers and researchers typically apply it in education, training, therapy, and entertainment.
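Several of the systems below (e.g., the AR roleplayer and job-interview prototypes) note that the VHT follows the SAIBA framework, in which behavior requests are expressed in BML (Behavior Markup Language). As an illustrative sketch only, the snippet below composes a minimal BML block asking a character to speak and gesture. The element and attribute names follow the public BML 1.0 draft; the exact message format and character names the VHT expects are assumptions here, not its actual API.

```python
# Minimal illustrative sketch of composing a BML request of the kind a
# SAIBA-style behavior-realization pipeline consumes. Names such as the
# character id "Brad" and sync-point syntax are assumptions for illustration.
import xml.etree.ElementTree as ET

def build_bml(character: str, utterance: str, gesture_lexeme: str) -> str:
    """Build a minimal BML block asking `character` to speak and gesture."""
    bml = ET.Element("bml", {"id": "bml1", "characterId": character})
    speech = ET.SubElement(bml, "speech", {"id": "sp1"})
    ET.SubElement(speech, "text").text = utterance
    # Synchronize the gesture stroke with the start of the speech behavior.
    ET.SubElement(bml, "gesture", {
        "id": "g1",
        "lexeme": gesture_lexeme,
        "stroke": "sp1:start",
    })
    return ET.tostring(bml, encoding="unicode")

print(build_bml("Brad", "Hello, how can I help you?", "BEAT"))
```

In a full SAIBA pipeline, a block like this would be produced by the behavior planner and handed to a realizer (e.g., an animation engine) over a message bus.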
See also:
Virtual Human Toolkit 2010 (2x) | Virtual Human Toolkit 2011 (10x) | Virtual Human Toolkit 2012 (11x) | Virtual Human Toolkit 2013 (11x) | Virtual Human Toolkit 2014 (28x) | Virtual Human Toolkit 2015 (40x) | Virtual Human Toolkit 2016 (39x) | Virtual Human Toolkit 2017 (31x) | Virtual Human Toolkit 2018 (31x)
Ubiquitous virtual humans: A multi-platform framework for embodied AI agents in XR
A Hartholt, E Fast, A Reilly, W Whitcup… – … and Virtual Reality …, 2019 – ieeexplore.ieee.org
… realization, text-to-speech generation, and rendering. This work builds on the Virtual Human Toolkit, which has been extended to support computing platforms beyond Windows. The resulting framework maintains the modularity …
Virtual human questionnaire for analysis of depression, anxiety and personality
S Jaiswal, M Valstar, K Kusumam… – Proceedings of the 19th …, 2019 – dl.acm.org
… and turn taking rather than understanding the content of conversation. Another popular framework is the Virtual Human Toolkit [10] which was developed for general purpose use. It consisted of a number of modules each spe …
Making virtual reality social
A Cordar, Y Heng, F Tavassoli, J Wood… – VR Developer …, 2019 – books.google.com
… To quickly create a virtual human capable of answering basic questions, there are systems such as USC’s Virtual Human Toolkit and the University of Florida’s Virtual People Factory that provide prebuilt templates of common questions and responses for virtual humans …
Virtual humans in augmented reality: A first step towards real-world embedded virtual roleplayers
A Hartholt, S Mozgai, E Fast, M Liewer… – Proceedings of the 7th …, 2019 – dl.acm.org
… It follows the SAIBA framework [6] and was built using the Virtual Human Toolkit [9]. A major new feature introduced in the AR prototype is an automated review phase, which provides users with immediate feedback after finishing an interview …
CasandRA: A Screenplay Approach to Dictate the Behavior of Virtual Humans in AmI Environments
E Stefanidi, A Leonidis, N Partarakis… – … Conference on Human …, 2019 – Springer
… For example, the ICT Virtual Human Toolkit [16, 17] offers a flexible framework for generating high fidelity embodied agents and integrating them in virtual environments … https://doi.org/10.1007/978-3-642-40415-3_33. 17. Virtual Human Toolkit …
The effect of virtual agent warmth on human-agent negotiation
P Prajod, M Al Owayyed, T Rietveld… – Proceedings of the 18th …, 2019 – ii.tudelft.nl
… Lastly, we collected data pertaining to the measures discussed in Section 4.3 and analyzed them. 4.1 Materials For the task of building agents with different warmth levels, we used the Virtual Human Toolkit [17] … 2013. All Together Now: Introducing the Virtual Human Toolkit …
Virtual job interviewing practice for high-anxiety populations
A Hartholt, S Mozgai, AS Rizzo – Proceedings of the 19th ACM …, 2019 – dl.acm.org
… Warehouse • Conference Room The system follows the SAIBA framework [7] and was built using the Virtual Human Toolkit [10] … 2013. All Together Now: Introducing the Virtual Human Toolkit. In 13th International Conference on Intelligent Virtual Agents. Edinburgh, UK …
PRIMER: An Emotionally Aware Virtual Agent.
C Gordon, A Leuski, G Benn, E Klassen, E Fast… – IUI …, 2019 – research.ibm.com
… developed using the VH Toolkit: • Virtual Agent: The virtual agent (Ellie) for the PRIMER system was created using the USC ICT Virtual Human (VH) Toolkit [5]. • NPCEditor: Utterance classification was carried out using the NPCEditor, a component of the Virtual Human Toolkit …
Virtual humans: Today and tomorrow
D Burden, M Savin-Baden – 2019 – books.google.com
… AFFECTIVE MIND ARCHITECTURE AND PSI MODELS 128 BECKER-ASANO’S WASABI MODEL 130 UNIVERSITY OF SOUTHERN CALIFORNIA’S VIRTUAL HUMAN TOOLKIT 132 OPENAI 134 INTEGRATION STANDARDS 135 CRITIQUE AND FUTURES 135 …
Conflict mediation in human-machine teaming: using a virtual agent to support mission planning and debriefing
KS Haring, J Tobias, J Waligora… – 2019 28th IEEE …, 2019 – ieeexplore.ieee.org
… D. Virtual Agent Mission Assistance The mission was assisted by an artificial agent, “Chris,” which was created using the Institute for Creative Technologies (ICT) Virtual Human Toolkit [13]. The ICT Virtual Human Toolkit uses …
DS2A: A Dialogue System for Sexual Assault and Harassment Education
D Traum, R Artstein, C Gordon, A Leuski – dialogforgood.org
… The DS2A system is designed as an integrated application in the Unity game engine (https://unity3d.com); it incorporates several components from the USC ICT Virtual Human Toolkit (Hartholt et al., 2013), which is publicly available (http://vhtoolkit.ict.usc.edu) …
Systematic Representative Design and Clinical Virtual Reality
S Mozgai, A Hartholt, AS Rizzo – Psychological Inquiry, 2019 – Taylor & Francis
The authors of the article, “Causal Inference in Generalizable Environments: Systematic Representative Design”, boldly announce their core point in the opening line of the abstract stating that, “C…
A research platform for multi-robot dialogue with humans
M Marge, S Nogar, CJ Hayes, SM Lukin… – arXiv preprint arXiv …, 2019 – arxiv.org
… [Hartholt et al. 2013] Hartholt, A.; Traum, D.; Marsella, S. C.; Shapiro, A.; Stratou, G.; Leuski, A.; Morency, L.-P.; and Gratch, J. 2013. All Together Now: Introducing the Virtual Human Toolkit. In Proceedings of the International Conference on Intelligent Virtual Agents. [Hayes et al …
Stay back, clever thing! Linking situational control and human uniqueness concerns to the aversion against autonomous technology
JP Stein, B Liebold, P Ohler – Computers in Human Behavior, 2019 – Elsevier
… artificial voice. Lastly, we added subtle motion capturing animations (e.g., nodding, hand gestures) taken from the freely available Virtual Human Toolkit library (Hartholt et al., 2013) to most of the prepared statements. According …
Digital survivor of sexual assault
R Artstein, C Gordon, U Sohail, C Merchant… – Proceedings of the 24th …, 2019 – dl.acm.org
… We therefore built a new, integrated application using the Unity game engine (https://unity3d.com), incorporating existing compo- nents from the USC ICT Virtual Human Toolkit, which is publicly available [5] (http://vhtoolkit.ict.usc.edu) …
Learning individual styles of conversational gesture
S Ginosar, A Bar, G Kohavi, C Chan… – Proceedings of the …, 2019 – openaccess.thecvf.com
… 2 [19] A. Hartholt, D. Traum, S. C. Marsella, A. Shapiro, G. Stratou, A. Leuski, L.-P. Morency, and J. Gratch. All Together Now: Introducing the Virtual Human Toolkit. In 13th International Conference on Intelligent Virtual Agents, Edinburgh, UK, Aug. 2013. 3
Using machine learning to generate engaging behaviours in immersive virtual environments
GC Dobre – 2019 8th International Conference on Affective …, 2019 – ieeexplore.ieee.org
… Most of these are based on components such as: the SmartBody animation system [33], Virtual Human Toolkit architecture [12], Behaviour Expression Animation Toolkit [6], Behaviour Markup Language [16] or Nonverbal Behaviour Generator [17] …
Effects of system response delays on elderly humans’ cognitive performance in a virtual training scenario
M Wirzberger, R Schmidt, M Georgi, W Hardt… – Scientific reports, 2019 – nature.com
Observed influences of system response delay in spoken human-machine dialogues are rather ambiguous and mainly focus on perceived system quality. Studies that systematically inspect effects on cognitive performance are still lacking, and effects of individual characteristics are …
Character initiative in dialogue increases user engagement and rapport
U Sohail, C Gordon, R Artstein, D Traum – SemDial Workshop on the …, 2019 – semdial.org
… The baseline DS2A system is designed as an integrated application in the Unity game engine (https://unity3d.com); it incorporates several components from the USC ICT Virtual Human Toolkit (Hartholt et al., 2013), which is publicly available (http://vhtoolkit.ict.usc.edu) …
Architecture and Ontology in the Generalized Intelligent Framework for Tutoring: 2018 Update
K Brawner, M Hoffman, B Nye – 7th Generalized Intelligent …, 2019 – books.google.com
… instruction (Rowe et al., 2018). VIRTUAL HUMAN TOOLKIT (VHTK) GIFT now supports 2 character servers, Media Semantics and Virtual Human. Both are available on the Downloads tab of gifttutoring.org. GIFT is now configured …
Virtual reality and augmented reality
Y El Miedany – Rheumatology Teaching, 2019 – Springer
… in real time to simulate a conversational flow [31]. In the development of SimCoach and the SimSensei Kiosk, a Virtual Human Toolkit (VHTK) has been utilized. This includes MultiSense, which tracks and analyses users’ facial expressions …
Yet another low-level agent handler
F Nunnari, A Heloir – Computer Animation and Virtual Worlds, 2019 – Wiley Online Library
… 2.1 Virtual human animation frameworks. A well established platform is the Virtual Human Toolkit (VHT),1 which gathers an extensive collection of modular tools aimed at creating and controlling the behavior of embodied conversational agents (ECAs) …
BEYOND MONOLOGUES
V Ramanarayanan, K Evanini… – … Technologies to Score …, 2019 – books.google.com
… While there are multiple academic implementations of spoken and multimodal dialog systems available, such as Olympus (Bohus, Raux, Harris, Eskenazi, & Rudnicky, 2007), Alex (Jurčíček, Dušek, Plátek, & Žilka, 2014), Virtual Human Toolkit (Hartholt et al., 2013), and …
Motion Data and Model Management for Applied Statistical Motion Synthesis.
E Herrmann, H Du, A Antakli, D Rubinstein, R Schubotz… – STAG, 2019 – diglib.eg.org
… scattered data interpolation. The ICT Virtual Human Toolkit [HTM*13] provides a complete system for the creation of autonomous human agents and the visualization of the behavior in a 3D engine. Shoulson et al. integrate …
Systematic representative design: A reply to commentaries
LC Miller, DC Jeong, L Wang, SJ Shaikh… – Psychological …, 2019 – Taylor & Francis
… But, as Mozgai et al. (this issue) suggest, some toolkits are already available for users with no or minimal cost (e.g., the Virtual Human Toolkit produced by USC’s Institute for Creative Technologies, https://vhtoolkit.ict.usc.edu) …
A Photo-realistic Voice-bot
J Alexander – 2019 – upcommons.upc.edu
MASTER THESIS IN ARTIFICIAL INTELLIGENCE. A Photo-realistic Voice-bot. Author: Jorge Alexander. Supervisor: Núria Castell Ariño, Computer Science Department, UPC. Facultat d’Informàtica de Barcelona (FIB), Universitat …
SECTION 1: UNDERSTANDING THE AIS PROBLEM SPACE
R Sottilare – ADAPTIVE INSTRUCTIONAL SYSTEM (AIS) …, 2019 – researchgate.net
… The interface for each GIFT course is unique but has a standard virtual human window and a textual interface as shown in Figure 1. The engine for the GIFT virtual human is based on the Virtual Human Toolkit (Gratch, Hartholt, Dehghani, & Marsella, 2013) …
Automotive multimodal human-machine interface
D Schnelle-Walka, S Radomski – The Handbook of Multimodal …, 2019 – dl.acm.org
12 Automotive Multimodal Human-Machine Interface. Dirk Schnelle-Walka, Stefan Radomski. 12.1 Introduction. The majority of user interfaces in the automotive domain were not developed as the result of user-centered …
Nonverbal Behavior in
A Cafaro, C Pelachaud… – The Handbook of …, 2019 – books.google.com
… The Virtual Human Toolkit. The Virtual Human Toolkit (VHT) [Hartholt et al … Figure 6.5: Rachel (left) and Brad (right) are the two representative ECAs of the Virtual Human Toolkit …
Nonverbal behavior in multimodal performances
A Cafaro, C Pelachaud, SC Marsella – The Handbook of Multimodal …, 2019 – dl.acm.org
6 Nonverbal Behavior in Multimodal Performances. Angelo Cafaro, Catherine Pelachaud, Stacy C. Marsella. 6.1 Introduction. The physical, nonverbal behaviors that accompany face-to-face interaction convey a wide variety …
Challenge discussion: advancing multimodal dialogue
J Allen, E André, PR Cohen, D Hakkani-Tür… – The Handbook of …, 2019 – dl.acm.org
5 Challenge Discussion: Advancing Multimodal Dialogue. James Allen, Elisabeth André, Philip R. Cohen, Dilek Hakkani-Tür, Ronald Kaplan, Oliver Lemon, David Traum. 5.1 Introduction. Arguably, dialogue management …
Embedded multimodal interfaces in robotics: applications, future trends, and societal implications
EA Kirchner, SH Fairclough, F Kirchner – The Handbook of Multimodal …, 2019 – dl.acm.org
13 Embedded Multimodal Interfaces in Robotics: Applications, Future Trends, and Societal Implications. Elsa A. Kirchner, Stephen H. Fairclough, Frank Kirchner. 13.1 Introduction. In the past, robots were primarily used …
Multimodal dialogue processing for machine translation
A Waibel – The Handbook of Multimodal-Multisensor Interfaces …, 2019 – dl.acm.org
14 Multimodal Dialogue Processing for Machine Translation. Alexander Waibel. 14.1 Introduction. Humans converse with each other to communicate and to develop ideas interactively in the presence of imprecise and under-specified information …
Multimodal conversational interaction with robots
G Skantze, J Gustafson, J Beskow – The Handbook of Multimodal …, 2019 – dl.acm.org
2 Multimodal Conversational Interaction with Robots. Gabriel Skantze, Joakim Gustafson, Jonas Beskow. 2.1 Introduction. Being able to communicate with machines through spoken interaction has been a long-standing vision in both science fiction and research labs …
TOWARD C
BD Nye, R Thaker, N Surana… – … for Intelligent Tutoring …, 2019 – books.google.com
… A second gateway (GIFT_VHT_Converter) exists to convert messages from GIFT format to a Virtual Human Toolkit (VHT) format consumed by a VHT Tutor Controller. Both of these gateways connect to an ActiveMQ gateway that ties into the GIFT main ActiveMQ service …
Medical and health systems
D Sonntag – The Handbook of Multimodal-Multisensor Interfaces …, 2019 – dl.acm.org
PART III: EMERGING TRENDS AND APPLICATIONS. 11 Medical and Health Systems. Daniel Sonntag. 11.1 Introduction. In this chapter, we discuss the trends of multimodal-multisensor interfaces for medical and health systems …
ET-GAN: Cross-language emotion transfer based on cycle-consistent generative adversarial networks
X Jia, J Tai, H Zhou, Y Li, W Zhang, H Du… – arXiv preprint arXiv …, 2019 – arxiv.org
arXiv:1905.11173v3 [cs.SD] 5 Mar 2020. ET-GAN: Cross-Language Emotion Transfer Based on Cycle-Consistent Generative Adversarial Networks. Xiaoqi Jia, Jianwei Tai, Hang Zhou, Yakai Li, Weijuan Zhang, Haichao Du, Qingjia Huang …
Architectural Middleware that Supports Building High-performance, Scalable, Ubiquitous, Intelligent Personal Assistants
OJ Romero – arXiv preprint arXiv:1906.02068, 2019 – arxiv.org
… The strongest middleware candidate we took into consideration to develop IPAs was VHT (Virtual Human Toolkit) [20]. VHT is a collection of modules, tools, and libraries designed to aid and support researchers with the creation of virtual human conversational assistants …
My Science Tutor and the MyST Corpus
W Ward, R Cole, S Pradhan – 2019 – researchgate.net
A. Project Overview. My Science Tutor and the MyST Corpus. Wayne Ward, Ron Cole and Sameer Pradhan, Boulder Learning Inc. My Science Tutor: Project Overview …
Socially-Aware Dialogue System
R Zhao – 2019 – lti.cs.cmu.edu
Socially-Aware Dialogue System. Ran Zhao. CMU-LTI-19-008. Language Technologies Institute, School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213. www.lti.cs.cmu.edu. Thesis …
Early integration for movement modeling in latent spaces
R Hornung, N Chen, P van der Smagt – The Handbook of Multimodal …, 2019 – dl.acm.org
8 Early Integration for Movement Modeling in Latent Spaces. Rachel Hornung, Nutan Chen, Patrick van der Smagt. 8.1 Introduction. In this chapter, we will show how techniques of advanced machine and deep learning …
Multimodal databases
M Valstar – The Handbook of Multimodal-Multisensor Interfaces …, 2019 – dl.acm.org
10 Multimodal Databases. Michel Valstar. 10.1 Introduction. In the preceding chapters, we have seen many examples of Multimodal, Multisensor Interfaces (MMIs). Almost all of these interfaces are implemented as computer …
Commercialization of multimodal systems
PR Cohen, R Tumuluri – The Handbook of Multimodal-Multisensor …, 2019 – dl.acm.org
15 Commercialization of Multimodal Systems. Philip R. Cohen, Raj Tumuluri. 15.1 Introduction. This chapter surveys the broad and accelerating commercial activity in building products incorporating multimodal-multisensor interfaces …
Ergonomics for the design of multimodal interfaces
A Heloir, F Nunnari, M Bachynskyi – The Handbook of Multimodal …, 2019 – dl.acm.org
PART II: MULTIMODAL BEHAVIOR. 7 Ergonomics for the Design of Multimodal Interfaces. Alexis Heloir, Fabrizio Nunnari, Myroslav Bachynskyi. 7.1 Introduction. There are many ways a machine can infer …
Design and implementation of embodied conversational agents
F Geraci – 2019 – rucore.libraries.rutgers.edu
© 2019 Fernando Geraci. ALL RIGHTS RESERVED. DESIGN AND IMPLEMENTATION OF EMBODIED CONVERSATIONAL AGENTS. By FERNANDO GERACI. A thesis submitted to the School of Graduate Studies, Rutgers, The State University of New Jersey …
Privacy concerns of multimodal sensor systems
G Friedland, MC Tschantz – The Handbook of Multimodal-Multisensor …, 2019 – dl.acm.org
16 Privacy Concerns of Multimodal Sensor Systems. Gerald Friedland, Michael Carl Tschantz. 16.1 Introduction. This chapter explains that ignoring the privacy risks introduced by multimodal systems could have severe consequences for society in the long term …
Multimodal integration for interactive conversational systems
M Johnston – The Handbook of Multimodal-Multisensor Interfaces …, 2019 – dl.acm.org
PART I: MULTIMODAL LANGUAGE AND DIALOGUE PROCESSING. 1 Multimodal Integration for Interactive Conversational Systems. Michael Johnston. 1.1 Introduction. This chapter discusses the challenges …
Motion Coordination for Large Multi-Robot Teams in Obstacle-Rich Environments
W Hönig – 2019 – search.proquest.com
Motion Coordination for Large Multi-Robot Teams in Obstacle-Rich Environments. Ph.D. Dissertation submitted by Wolfgang Hönig, May 2019. Committee: Nora Ayanian (Chair), Gaurav S. Sukhatme, Sven Koenig, Vijay Kumar (Outside Member) …
Software platforms and toolkits for building multimodal systems and applications
M Feld, R Neßelrath, T Schwartz – The Handbook of Multimodal …, 2019 – dl.acm.org
4 Software Platforms and Toolkits for Building Multimodal Systems and Applications. Michael Feld, Robert Neßelrath, Tim Schwartz. 4.1 Introduction. This chapter introduces various concepts needed for the realization of multimodal systems …
Situated interaction
D Bohus, E Horvitz – The Handbook of Multimodal-Multisensor Interfaces …, 2019 – dl.acm.org
3 Situated Interaction. Dan Bohus, Eric Horvitz. 3.1 Introduction. Interacting with computers via natural language is an enduring aspiration in artificial intelligence. The earliest attempts at dialog between computers and …
Standardized representations and markup languages for multimodal interaction
R Tumuluri, D Dahl, F Paternò… – The Handbook of …, 2019 – dl.acm.org
9 Standardized Representations and Markup Languages for Multimodal Interaction. Raj Tumuluri, Deborah Dahl, Fabio Paternò, Massimo Zancanaro. 9.1 Introduction. This chapter discusses some standard languages …