Notes:
The Articulated Communicator Engine (ACE) is a software platform for building and visualizing animated embodied agents that generate human-like multimodal utterances. It operates at the behavior-realization layer and, regarded as one of the most sophisticated multimodal schedulers, replaces lexicons of canned behaviors with on-the-spot production of flexibly planned behavior representations. Originally implemented for the virtual human Max, ACE has since been applied to both virtual agents and robots (e.g. the Honda humanoid ASIMO) and can generate specific hand-gesture types such as deictic and iconic gestures.
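The core scheduling problem ACE solves is cross-modal synchrony: a gesture's stroke phase must coincide with its co-expressive ("affiliate") word in the speech stream. The following toy sketch illustrates that constraint only; it is not ACE's actual algorithm, and the word timings and function are hypothetical.

```python
# Illustrative sketch only: a toy speech-gesture scheduler, NOT ACE's
# actual algorithm. It shows the kind of cross-modal timing constraint
# ACE resolves: the gesture stroke must coincide with its affiliate word.

# Hypothetical word-onset times in seconds (e.g. from a TTS engine).
word_onsets = {"this": 0.0, "is": 0.25, "the": 0.40, "living": 0.55, "room": 0.95}

def schedule_gesture(affiliate, prep_duration, onsets):
    """Return (prep_start, stroke_start) so the stroke lands on the affiliate word."""
    stroke_start = onsets[affiliate]
    # Begin the preparation phase early enough to reach the stroke in time,
    # but never before the utterance itself starts.
    prep_start = max(0.0, round(stroke_start - prep_duration, 3))
    return prep_start, stroke_start

prep_start, stroke_start = schedule_gesture("living", prep_duration=0.4, onsets=word_onsets)
print(prep_start, stroke_start)  # 0.15 0.55
```

In the real engine the adaptation is mutual (speech pauses can stretch to accommodate gestures, and vice versa), which is what the literature below highlights as ACE's distinguishing feature.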
The Multimodal Utterance Representation Markup Language (MURML) is an XML-based markup language for representing and annotating multimodal utterances: combinations of speech with nonverbal behaviors such as gestures and facial expressions. It specifies the timing and coordination of these behaviors so that co-expressive elements remain synchronized, yielding natural and expressive communication. MURML descriptions serve as input to behavior realizers such as the Articulated Communicator Engine (ACE) and can thereby drive the behavior of virtual agents or robots. The language is designed to be flexible and extensible, allowing a wide range of behaviors to be annotated and new ones to be defined as needed.
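To make the idea concrete, here is a MURML-style utterance specification parsed with Python's standard library. The tag and attribute names are assumptions loosely modeled on published MURML examples, not the actual schema: speech text carries `<time>` markers, and a separate behavior specification anchors a gesture to those markers.

```python
# Hypothetical MURML-style specification (tag/attribute names are
# assumptions modeled loosely on published examples, not the real schema).
import xml.etree.ElementTree as ET

murml = """
<utterance>
  <specification>
    This is <time id="t1"/> the living room <time id="t2"/>.
  </specification>
  <behaviorspec id="gesture_1">
    <gesture affiliate-onset="t1" affiliate-end="t2">
      <static slot="HandShape" value="BSflat"/>
      <static slot="HandLocation" value="LocCenterRight"/>
    </gesture>
  </behaviorspec>
</utterance>
"""

root = ET.fromstring(murml)
# Collect the time markers that anchor the gesture to the speech stream.
markers = [t.get("id") for t in root.iter("time")]
print(markers)  # ['t1', 't2']
```

The key design point survives any schema details: timing is expressed symbolically (marker IDs), so the realizer, not the author, resolves absolute times at synthesis.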
References:
Towards an integrated model of speech and gesture production for multi-modal robot behavior
M Salem, S Kopp, I Wachsmuth… – … Symposium in Robot …, 2010 – ieeexplore.ieee.org
… We propose a robot control architecture building upon the Articulated Communicator Engine (ACE) that was developed to allow virtual agents to flexibly realize planned multi-modal behavior representations on the spot. Our …
Towards meaningful robot gesture
M Salem, S Kopp, I Wachsmuth, F Joublin – Human centered robot systems, 2009 – Springer
… Being one of the most sophisticated multi-modal schedulers, the Articulated Communicator Engine (ACE) has replaced the use of lexicons of canned behaviors with an on-the-spot production of flexibly planned behavior representations …
Generating robot gesture using a virtual agent framework
M Salem, S Kopp, I Wachsmuth… – 2010 IEEE/RSJ …, 2010 – ieeexplore.ieee.org
… We describe an approach to enable the humanoid robot ASIMO to flexibly produce communicative gestures at run-time, building upon the Articulated Communicator Engine (ACE) that was developed to allow virtual agents to realize planned behavior representations on the spot …
ION framework–a simulation environment for worlds with virtual agents
M Vala, G Raimundo, P Sequeira, P Cuba… – … Workshop on Intelligent …, 2009 – Springer
… There are currently systems that use these markup languages like SmartBody [15] or ACE (Articulated Communicator Engine) [10] … Kopp, S.: Articulated communicator engine (ACE) (2000), http://www.techfak.uni-bielefeld.de/skopp/max.html (last seen April 2009) …
The behavior markup language: Recent developments and challenges
H Vilhjálmsson, N Cantelmo, J Cassell… – … Workshop on Intelligent …, 2007 – Springer
… 5.3 Behavior Realizers. ACE: The Articulated Communicator Engine. The Articulated Communicator Engine (ACE) is a behavior realization engine that allows the modeling of virtual …
Embodied communication in humans and machines
I Wachsmuth, M Lenzen, G Knoblich – 2008 – books.google.com
… Abbreviations % REC percent recurrence ACA activity based communication analysis ACE articulated communicator engine ACQ augmented competitive queuing aIPS anterior intraparietal sulcus APML affective presentation markup language BEAT behavior expression …
Generation and evaluation of communicative robot gesture
M Salem, S Kopp, I Wachsmuth, K Rohlfing… – International Journal of …, 2012 – Springer
… 3, we describe our multimodal behavior realizer, the Articulated Communicator Engine (ACE), which implements the speech-gesture production model originally designed and implemented for the virtual human agent Max and is now used for the Honda humanoid robot (Fig …
An incremental multimodal realizer for behavior co-articulation and coordination
H Van Welbergen, D Reidsma, S Kopp – International Conference on …, 2012 – Springer
… The ACE (Articulated Communicator Engine) [4] realizer was the first behavior generation system that simulated the mutual adaptations between the timing of gesture and speech that humans employ to achieve synchrony between co-expressive elements in those two …
Generating multi-modal robot behavior based on a virtual agent framework
M Salem, S Kopp, I Wachsmuth… – Proceedings of the ICRA …, 2010 – pub.uni-bielefeld.de
… used for the virtual human Max [2]. Being one of the most sophisticated multi-modal schedulers, the Articulated Communicator Engine (ACE) has replaced the use of lexicons of canned behaviors with an on-the-spot production of flexibly planned behavior representations …
A friendly gesture: Investigating the effect of multimodal robot behavior in human-robot interaction
M Salem, K Rohlfing, S Kopp, F Joublin – 2011 Ro-Man, 2011 – ieeexplore.ieee.org
… In particular, we build on the Articulated Communicator Engine (ACE), which is one of the most sophisticated multimodal schedulers and behavior realizers by replacing the use of lexicons of canned behaviors with an on-line production of flexibly planned behavior …
From Communicators to Resonators
S Kopp – Citeseer
… ACE – Articulated Communicator Engine … Microplanning, Surface Realization, Action selection & content planning, BML, FML … Modeling speech & gesture … Linguistic …
Generation and Analysis of Communicative Robot Gesture
M Salem, S Kopp, I Wachsmuth, K Rohlfing, F Joublin – academia.edu
… The Articulated Communicator Engine (ACE) implements the speech-gesture production model that was originally designed for the virtual human agent Max and is now used as the underlying action generation framework for the Honda humanoid robot (Fig …
A virtual agent as vocabulary trainer: iconic gestures help to improve learners’ memory performance
K Bergmann, M Macedonia – International workshop on intelligent virtual …, 2013 – Springer
… They were specified in the Multimodal Utterance Representation Markup Language (MURML; [7]) and realized with the Articulated Communicator Engine (ACE; [5]), a toolkit for building animated embodied agents that are able to generate human-like multimodal utterances …
The rise of the conversational interface: A new kid on the block?
MF McTear – international workshop on future and emerging trends …, 2016 – Springer
… See [5] for descriptions of many of these standards. Toolkits, for example, the Virtual Human Toolkit 20 and ACE (Articulated Communicator Engine). 21. For more detailed descriptions of ECAs, see [14], especially Chaps. 13–16. 3.4 Chatbots …
A controller-based animation system for synchronizing and realizing human-like conversational behaviors
A Čereković, T Pejša, IS Pandžić – Development of Multimodal Interfaces …, 2010 – Springer
… Two BML-compliant animation engines have been developed so far – ACE and SmartBody [6]. Articulated Communicator Engine (ACE) developed from an earlier engine based on multimodal utterance representation markup language (MURML) [15] …
Social resonance and embodied coordination in face-to-face conversation with artificial interlocutors
S Kopp – Speech Communication, 2010 – Elsevier
… Our resonance-based gesture perception model is shown in Fig. 3. It is built atop a computational model of gesture motor control for humanoid agents, part of the “Articulated Communicator Engine” (ACE) (Kopp and Wachsmuth, 2004) …
Standardized Prototyping and Development of Virtual Agents
AS Hill, J Cassell – 2008 – Citeseer
… The NUMACK project utilized an underlying motion engine called the Articulated Communicator Engine (ACE) [6]. The ACE system is capable of aligning speech and gesture as proposed by the BML specification. This technology …
Exploring the alignment space–lexical and gestural alignment with real and virtual humans
K Bergmann, HP Branigan, S Kopp – Frontiers in ICT, 2015 – frontiersin.org
… as a between-subjects variable. In the VH conditions, we employed the virtual character “Billie” with the Articulated Communicator Engine [ACE; Kopp and Wachsmuth (2004)] for facial animation. To synthesize the virtual human’s …
Closing the loop: Towards tightly synchronized robot gesture and speech
M Salem, S Kopp, F Joublin – International Conference on Social Robotics, 2013 – Springer
… Complementing these approaches, Kopp and Wachsmuth [8] introduced the Articulated Communicator Engine (ACE) as the first multimodal behavior realizer for virtual agents that provides for mutual adaptation mechanisms between speech and gesture …
Surface Realization of Multimodal Output from XML representations in MURML
S Kopp – Invited Workshop on Representations for …, 2005 – techfak.uni-bielefeld.de
… accompanying nonverbal behaviors. The final realization step is accomplished by the Articulated Communicator Engine (ACE, for short), which allows to create and visualize animated multimodal agents. In the next sections …
Incremental multimodal feedback for conversational agents
S Kopp, T Stocksmeier, D Gibbon – International Workshop on Intelligent …, 2007 – Springer
… to verbal feedback requests from the planner. The feedback generator operates a number of modality-specific generators, realized in the Articulated Communicator Engine (ACE) [11]. To be able to realize verbal backchannels …
Describing and animating complex communicative verbal and nonverbal behavior using Eva-framework
I Mlakar, Z Kačič, M Rojc – Applied artificial intelligence, 2014 – Taylor & Francis
… The articulated communicator engine (ACE; Kopp and Wachsmuth 2004) is a behavior-realization engine that allows for the modeling of virtual animated agents using the gesture description language Multimodal Utterance Representation Markup Language (MURML) …
Affective educational games: Utilizing emotions in game-based learning
P Wilkinson – 2013 5th International Conference on Games and …, 2013 – ieeexplore.ieee.org
… Regarding emotional modelling in synthetic agents, the Articulated Communicator Engine (ACE) is a toolkit for modelling emotional expression and is based on the Multimodal Utterance Representation Markup Language (MURML) [69] …
A bizarre virtual trainer outperforms a human trainer in foreign language word learning
M Macedonia – International Journal of Computer Science and Artificial …, 2014 – pure.mpg.de
… The agent’s gestures were specified in the Multimodal Utterance Representation Markup Language (MURML) [34] and realized by the Articulated Communicator Engine (ACE) [35], a toolkit for making animated embodied agents that are capable of generating human-like …
Multimodal behavior realization for embodied conversational agents
A Čereković, IS Pandžić – Multimedia Tools and Applications, 2011 – Springer
… So far three BML-compliant animation engines have been developed—ACE, SmartBody [51] and EMBR system [22]. Articulated Communicator Engine (ACE) developed from an earlier engine based on multimodal utterance representation markup language (MURML) [29] …
TTS-driven synthetic behavior generation model for embodied conversational agents
I Mlakar, Z Kacic, M Rojc – Coverbal Synchrony in Human …, 2013 – books.google.com
… For instance, the Articulated Communicator Engine (ACE) (Kopp and Wachsmuth, 2004) is a behavior-realization engine that allows for the modeling of virtual animated agents using the MURML gesture description language (Kranstedt et al., 2002) …
Conversational interfaces: devices, wearables, virtual agents, and robots
M McTear, Z Callejas, D Griol – The Conversational Interface, 2016 – Springer
… MAX and the Articulated Communicator Engine (ACE) . 34 ACE is a toolkit for building ECAs with a kinematic body model and multimodal utterance generation based on MURML. MAX is an ECA developed for cooperative construction …
Conceptual Motorics-Generation and Evaluation of Communicative Robot Gesture
M Salem – 2013 – books.google.com
… 4.2.1 Articulated Communicator Engine (ACE), 4.2.2 Whole Body Motion (WBM) … 3.4 Architecture of the Gesture Engine of the GRETA system, 3.5 Architecture of the Articulated Communicator Engine (ACE) of the MAX system …
A New Distributed Platform For Client-Side Fusion Of Web Applications And Natural Modalities—A Multimodal Web Platform
I Mlakar, M Rojc – Applied artificial intelligence, 2013 – Taylor & Francis
… Several software frameworks support the development of embodied conversational agents (ECAs), for instance, the Articulated Communicator Engine (ACE; Kopp and Wachsmuth 2004), the Behavior Expression Animation Toolkit (BEAT; Cassell, Vilhjálmsson, and Bickmore …
Co-expressivity of speech and gesture: Lessons for models of aligned speech and gesture production
K Bergmann, S Kopp – Symposium at the AISB Annual Convention …, 2007 – researchgate.net
… 5.2 Multimodal synchronization In other previous work, we have developed the virtual human Max (see figure 2) based on the Articulated Communicator Engine (ACE, for short) for behavior realization [16]. ACE allows to create …
The Embodied Conversational Agent Toolkit: A new modularization approach.
RJ Werf – 2008 – essay.utwente.nl
… From the University of Bielefeld I would like to thank Ipke Wachsmuth for giving his consent to use the Articulated Communicator Engine (ACE). Especially I want to thank Stefan Kopp for his guidance and support on ACE and for letting me visit the University of Bielefeld …
Implementing a non-modular theory of language production in an embodied conversational agent
T Sowa, S Kopp, S Duncan, D McNeill… – … in humans and …, 2008 – books.google.com
… 2004), two ECAs that embody models of speech and gesture production … these behaviors and the temporal relations between them. In the virtual human Max, the “Articulated Communicator Engine” (ACE, for short) is employed for this task …
Multimodal content representation for speech and gesture production
K Bergmann, S Kopp – Proceedings of the 2nd Workshop on Multimodal …, 2008 – Citeseer
… At last, Motor Control and Phonation are concerned with the concrete realization of speech and hand/arm movements for our virtual human Max employing the Articulated Communicator Engine (ACE, for short) for behavior realization [26] …
SmilieFace: an innovative affective messaging application to enhance social networking
D McMeekin – 2013 – espace.curtin.edu.au
… Department of Computing. SmilieFace: An Innovative Affective Messaging Application to Enhance Social Networking. This thesis is presented for the Degree of Master of Philosophy (Computer Science) of Curtin University, July 2013 …