Notes:
A dialogue system, also known as a conversational agent or chatbot, is a computer program designed to communicate with humans in natural language. Users interact with the system through text or voice input and receive responses in the same format.
A multimodal dialogue system, by contrast, can accept and produce multiple forms of input and output, such as text, voice, and visual elements like images or video. Multimodal dialogue systems are designed to improve the user experience by letting users interact in a more natural and intuitive way, across a variety of input and output modalities.
For example, a multimodal dialogue system might let a user ask a question by voice and answer with a text message, an image, or a video. This can make the interaction feel more natural and engaging.
A multimodal agent is the broader category: any computer program that communicates with humans through multiple modalities, such as text, voice, images, and video.
Multimodal dialogue systems are multimodal agents designed specifically for conversation: they take text or voice input and can respond through any combination of output modalities, such as text, images, or video.
Multimodal agents are used in applications such as customer service, virtual assistants, and educational tools, and can be integrated into a range of devices, such as smartphones, smart speakers, and smart home devices.
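The distinction above can be made concrete with a minimal sketch (the class names, the single hard-coded "weather" intent, and the image URL are all invented for illustration, not taken from any real system): one user utterance, whether typed or transcribed from voice, yields a response that may carry several output modalities at once.

```python
from dataclasses import dataclass

@dataclass
class Response:
    """A reply that can carry more than one output modality."""
    text: str = ""
    image_url: str = ""  # optional visual output; empty if unused

class MultimodalDialogueSystem:
    """Toy dialogue loop: accepts text or transcribed voice input and
    chooses which output modalities to fill in the response."""

    def respond(self, utterance: str, modality: str = "text") -> Response:
        # A real system would run NLU here; this keys off one
        # hypothetical intent purely for illustration.
        if "weather" in utterance.lower():
            return Response(
                text="It is sunny today.",
                image_url="https://example.org/sunny.png",  # placeholder asset
            )
        return Response(text="Sorry, I did not understand.")

system = MultimodalDialogueSystem()
reply = system.respond("What's the weather like?", modality="voice")
print(reply.text)  # the same answer could also be spoken or rendered as an image
```

The point is only structural: the input modality (here a string tag) and the output modalities (fields of `Response`) vary independently, which is what separates a multimodal dialogue system from a text-only chatbot.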
See also:
Preliminary Study of Multi-modal Dialogue System for Personal Robot with IoTs
S Yamasaki, K Matsui – … on Distributed Computing and Artificial Intelligence, 2017 – Springer
Personal robot is a new technological solution where various IoT machines collaborate and provide new type of services. In this research, we develop a cloud-network based personal robot platform technologies, which offer advanced IoT services through its cost-effective …
Observing, coaching and reflecting: a multi-modal natural language-based dialogue system in a learning context
JV Helvert, PV Rosmalen, D Börner… – … Proceedings of the …, 2015 – books.google.com
… Natural conversational interaction, mixed-reality, multi-modal dialogue systems, immersive, debate skills, learning analytics, reflection 1. Introduction As we move towards a world of smart and immersive environments we are seeking new ways of interfacing and engaging with …
Combining chat and task-based multimodal dialogue for more engaging HRI: A scalable method using reinforcement learning
I Papaioannou, O Lemon – Proceedings of the Companion of the 2017 …, 2017 – dl.acm.org
… [2] A. Dingli and D. Scerri. Building a hybrid: Chatterbot – dialog system. Text, Speech, and Dialogue, pages 145–152, 2013. [3] O. Lemon, A. Bracy, A. Gruenstein, and S. Peters. A multi-modal dialogue system for human-robot conversation. In Proc. NAACL. 2001 …
Evaluating remote and head-worn eye trackers in multi-modal speech-based HRI
M Barz, P Poller, D Sonntag – Proceedings of the Companion of the 2017 …, 2017 – dl.acm.org
… In this work, we present the prototype of a gaze-supported multi-modal dialogue system that enhances two core tasks in human-robot collaboration: 1) our robot is able to learn new objects and their location from user instructions involving gaze, and 2) it can instruct the user to …
Situated language understanding for a spoken dialog system within vehicles
T Misu, A Raux, R Gupta, I Lane – Computer Speech & Language, 2015 – Elsevier
Incremental Generation of Visually Grounded Language in Situated Dialogue (demonstration system)
Y Yu, A Eshghi, O Lemon – Proceedings of the 9th International Natural …, 2016 – aclweb.org
… We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor (Yu et al., ). The system …
Multi-modal interaction for UAS control
G Taylor, B Purman, P Schermerhorn… – Unmanned Systems …, 2015 – spiedigitallibrary.org
… simulated environment to provide situation awareness. While speech has been used experimentally with VSCS, it did not have integrated sketch, nor did it have a multi-modal dialogue system included. We integrated SID as a …
Training an adaptive dialogue policy for interactive learning of visually grounded word meanings
Y Yu, A Eshghi, O Lemon – arXiv preprint arXiv:1709.10426, 2017 – arxiv.org
… 7 Conclusion and Future work We have presented a multi-modal dialogue system that learns grounded word meanings from a human tutor, incrementally, over time, and employs a dynamic dialogue policy (optimised using Reinforcement Learning) …
Natural, Multi-modal Interfaces for Unmanned Systems
G Taylor – … Conference on Engineering Psychology and Cognitive …, 2017 – Springer
… In: Decalog 2007: Proceedings of the 11th Workshop on the Semantics and Pragmatics of Dialogue, Rovereto, Italy (2007)Google Scholar. 10. Lemon, O., et al.: A multi-modal dialogue system for human-robot conversation. In: NAACL (2001)Google Scholar. 11 …
A modular, multimodal open-source virtual interviewer dialog agent
K Cofino, V Ramanarayanan, P Lange… – Proceedings of the 19th …, 2017 – dl.acm.org
… behavior. 3 CONCLUSIONS We have presented an open-source virtual agent that can seamlessly interface with an existing open-source modular cloud-based multi-modal dialog system to create immersive interactive experiences …
Engineering interactive systems with SCXML
S Radomski, D Schnelle-Walka, D Dahl… – Proceedings of the 8th …, 2016 – dl.acm.org
… Our goal is to attract a wide range of submissions related to the declarative modeling of interactive multi-modal dialog systems to leverage the discussion and thus to advance the research of modeling interactive multi-modal dialog systems …
Adaptive dialogue management in the kristina project for multicultural health care applications
L Pragst, S Ultes, M Kraus… – Proceedings of the …, 2015 – pubman.mpdl.mpg.de
… patients as well as supervision personnel. To provide natural communication, the KRISTINA agent will be designed as a multi-modal dialogue system having a dialogue manager (DM) at its core. Establishing the described kind of …
Flexible, Rapid Authoring of Goal-Orientated, Multi-Turn Dialogues Using the Task Completion Platform.
A Marin, PA Crook, OZ Khan, V Radostev… – …, 2016 – isca-speech.org
… The Task Completion Platform (TCP) is a multi-domain, multi-modal dialogue system that can host and execute large numbers of goal-orientated dialogue tasks. TCP is comprised of …
A corpus for experimental study of affect bursts in human-robot interaction
L Bechade, K El Haddad, J Bourquin… – Proceedings of the 1st …, 2017 – dl.acm.org
… out in the framework of the Joker Project which provides a multi-modal dialog system with social skills as humor and empathy [2]. We expect the affect bursts to either communicate a message as a stand-alone expression, or to complete the linguistic content of a sentence thus …
2. History of Conversational Systems
T Nishida – 2017 – ii.ist.i.kyoto-u.ac.jp
… [timeline figure, 1970–2010: story understanding systems → natural language dialogue systems → speech dialogue systems → multi-modal dialogue systems → embodied conversational agents / intelligent virtual humans; conversational systems vs. transactional systems] …
Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings
Y Yu, A Eshghi, O Lemon – arXiv preprint arXiv:1709.10423, 2017 – arxiv.org
… 1). In order to cope with the first problem, recent prior work (Yu et al., 2016b,c) has built multi-modal dialogue systems to investigate the effects of different dialogue strategies and capabilities on the overall learning performance …
RubyStar: A Non-Task-Oriented Mixture Model Dialog System
H Liu, T Lin, H Sun, W Lin, CW Chang, T Zhong… – arXiv preprint arXiv …, 2017 – arxiv.org
… arXiv preprint arXiv:1603.06155, 2016a. Zhou Yu, Alexandros Papangelis, and Alexander Rudnicky. Ticktock: A non-goal-oriented multi-modal dialog system with engagement awareness. In Proceedings of the AAAI Spring Symposium, 2015 …
Augmented robotics dialog system for enhancing human–robot interaction
F Alonso-Martín, A Castro-González, FJFG Luengo… – Sensors, 2015 – mdpi.com
Augmented reality, augmented television and second screen are cutting edge technologies that provide end users extra and enhanced information related to certain events in real time. This enriched information helps users better understand such events, at the same time providing …
Natural interaction for unmanned systems
G Taylor, B Purman, P Schermerhorn… – Unmanned Systems …, 2015 – spiedigitallibrary.org
The hri-cmu corpus of situated in-car interactions
D Cohen, A Chandrashekaran, I Lane… – Situated Dialog in Speech …, 2016 – Springer
… Scholar. 15. Misu T et al (2013) Situated multi-modal dialog system in vehicles. In: Proceeding of ICMIGoogle Scholar. 16. Möller S, Gödde F, Wolters M (2008) A corpus analysis of spoken smart-home interactions with older users …
Machine Learning and Social Robotics for Detecting Early Signs of Dementia
P Jonell, J Mendelson, T Storskog, G Hagman… – arXiv preprint arXiv …, 2017 – arxiv.org
… projection and/or the mask. This feature will be exploited in the participatory design study (Section 5.2). The Furhat robot includes a multi-modal dialog system, implemented using the IrisTK framework [26]. It uses statecharts as a …
A Clinical Evaluation of Human Computer Interaction Using Multi Modal Fusion Techniques
N Devi, KS Easwarakumar – Journal of Medical Imaging and …, 2017 – ingentaconnect.com
Interactively learning visually grounded word meanings from a human tutor
Y Yu, A Eshghi, O Lemon – Proceedings of the 5th Workshop on Vision …, 2016 – aclweb.org
… so better overall performance. 6 Conclusion and Future work We have presented a multi-modal dialogue system that learns grounded word meanings from a human tutor, incrementally, over time. The system integrates a semantic …
Companion-technology: an overview
S Biundo, D Höller, B Schattenberg, P Bercher – KI-Künstliche Intelligenz, 2016 – Springer
… Focused on multi-modal dialog systems, the SEMAINE (Sustained emotionally colored machine-human interaction using non-verbal expression) FP7 initiative [29, 83] followed the vision to provide these systems with a technology such that a human user is able to engage with …
Metalogue: A Multimodal Learning Journey
D Koryzis, V Svolopoulos… – Proceedings of the 9th …, 2016 – dl.acm.org
… ABSTRACT In this paper, we present a high-level description of the Metalogue system that develops a multi-modal dialogue system that is able to implement interactive behavior between a virtual agent and a learner outlining the insight to the development of a fully- integrated …
Welcome to EICS 2015
J Ziegler, M Nebeling, L Nigay, JC Campos… – 2015 – repositorium.sdum.uminho.pt
… device interfaces. Another workshop is specifically dedicated to the potential of the recently proposed W3C standard SCXML on event-driven state machines for the development of multi-modal dialog systems. The selection …
Creating a virtual neighbor
C Corbin, F Morbini, D Traum – Natural Language Dialog Systems and …, 2015 – Springer
… In: Proceedings of Eurospeech, pp 1327–1330Google Scholar. Yoshioka O, Minami Y, Shikano K (1994) A multi-modal dialogue system for telephone directory assistance. In: International conference on spoken language processing (ICSLP'94)Google Scholar …
Voice Controlled Multi-robot System for Collaborative Task Achievement
C Ouali, MM Nasr, MAM AbdelGalil, F Karray – … Conference on Robot …, 2017 – Springer
… pp. 25–27 (2001)Google Scholar. 16. Toptsis, I., Li, S., Wrede, B., Fink, GA: A multi-modal dialog system for a mobile robot. In: Eighth International Conference on Spoken Language Processing (2004)Google Scholar. 17. Drygajlo …
The effect of a physical robot on vocabulary learning
A Wedenborn, P Wik, O Engwall… – Proc. Int. Work. Spok …, 2016 – speech.kth.se
… other aspects of the interaction was autonomous. The application was built using the IrisTK framework [4], which is a Java-based framework for constructing multi-modal dialogue systems. IrisTK provides an API for developers …
Semantic Grounding in Dialogue for Complex Problem Solving
X Li, K Boyer – Proceedings of the 2015 Conference of the North …, 2015 – aclweb.org
Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL, pages 841–850, Denver, Colorado, May 31 – June 5, 2015. © 2015 Association for Computational Linguistics …
Addressee and Response Selection in Multi-Party Conversations with Speaker Interaction RNNs
R Zhang, H Lee, L Polymenakos, D Radev – arXiv preprint arXiv …, 2017 – arxiv.org
… Since the data is text based, they use only textual information to predict addressees as opposed to relying on acoustic signals or gaze information in multi-modal dialog systems (Jovanovic, Akker, and Nijholt 2006; Akker and Traum 2009) …
A conversational dialogue manager for the humanoid robot erica
P Milhorat, D Lala, K Inoue, T Zhao… – Proceedings of …, 2017 – sap.ist.i.kyoto-u.ac.jp
… AI Magazine 32(2), 42–56 (2011) 16. Misu, T., Raux, A., Lane, I., Devassy, J., Gupta, R.: Situated multi-modal dialog system in vehicles. In: Proceedings of the 6th workshop on Eye gaze in intelligent human machine interaction: gaze in multimodal interaction, pp. 7–9 (2013) 17 …
Probabilistic features for connecting eye gaze to spoken language understanding
A Prokofieva, M Slaney… – Acoustics, Speech and …, 2015 – ieeexplore.ieee.org
… 2013. [10] T. Misu, A. Raux, I. Lane, J. Devassy, and R. Gupta, “Situated multi-modal dialog system in vehicles,” in Proceedings of the 6th ACM workshop on Eye gaze in intelligent human machine interaction, 2013, pp. 25–28 …
Modelling Multi-issue Bargaining Dialogues: Data Collection, Annotation Design and Corpus.
V Petukhova, CA Stevens, H de Weerd, N Taatgen… – LREC, 2016 – researchgate.net
… Full author list: Volha Petukhova, Christopher Stevens, Harmen de Weerd, Niels Taatgen, Fokie Cnossen, Andrei Malchanau; Saarland …
Unknown Word Detection Based on Event-Related Brain Desynchronization Responses
T Sasakura, S Sakti, G Neubig, T Toda… – Natural Language Dialog …, 2015 – Springer
… EEG-ERD). Future work includes improvement of the performance of the classifier, experiments in an environment like a real conversation, and application to a multi-modal dialog system. Notes. Acknowledgements. Part of this …
The future of human robot teams in the army: factors affecting a model of human-system dialogue towards greater team collaboration
AW Evans, M Marge, E Stump, G Warnell… – Advances in Human …, 2017 – Springer
… In: AAAI Fall Symposium: Dialog with Robots, Nov 2010Google Scholar. 24. Lemon, O., Bracy, A., Gruenstein, A., Peters, S.: The WITAS multi-modal dialogue system I. In: INTERSPEECH, pp. 1559–1562, Sept 2001Google Scholar. 25 …
Architecture and representation for handling dialogues in human-robot interactions
E Retamino, S Nair, A Vijayalingam… – Signal and Information …, 2015 – ieeexplore.ieee.org
… [4] T. Misu, A. Raux, I. Lane, J. Devassy, and R. Gupta, "Situated multi-modal dialog system in vehicles," in Proceedings of the 6th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Gaze in Multimodal Interaction, GazeIn '13, (New York, NY, USA), pp …
Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home)
V Këpuska, G Bohouta – Computing and Communication …, 2018 – ieeexplore.ieee.org
… various kinds of Virtual Personal Assistants (VPAs) based on their applications and areas, such as Microsoft's Cortana, Apple's Siri, Amazon Alexa, Google Assistant, and Facebook's M. However, in this proposal, we have used the multi-modal dialogue systems which process …
Speech-based home automation system
E Fytrakis, I Georgoulas, J Part, Y Zhu – Proceedings of the 2015 British …, 2015 – dl.acm.org
… home. In the following, we review some of these approaches. In 2010, Gatica-Perez et al. [1] presented a multi-modal dialogue system consisting of speech and a graphical interface to interact with home appliances. One interesting …
The Future of Human-Robot Spoken Dialogue: from Information Services to Virtual Assistants NII Shonan Meeting Report
RE Banchs, S Sakti, E Mizukami – 2015 – shonan.nii.ac.jp
… capabilities. Turn-taking and attention in human-robot dialogue Gabriel Skantze, KTH At KTH we have been doing research on multi-modal dialogue systems for a long time, often based on observations of human-human dialogue. In …
Smarter Driving with IDA, the Intelligent Driving Assistant for Singapore
AI Niculescu, NTH Thai, C Ni, BP Lim… – … Annual Conference of …, 2015 – isca-speech.org
… waste for all drivers on the road. Index Terms: multi-modal dialogue system, speech recognition, natural language understanding, smart parking application, interaction design 1. Introduction Despite being ranked top for smooth …
Driver confusion status detection using recurrent neural networks
C Hori, S Watanabe, T Hori… – Multimedia and Expo …, 2016 – ieeexplore.ieee.org
… [2] Teruhisa Misu, Antoine Raux, Ian Lane, Joan Devassy, and Rakesh Gupta, “Situated multi-modal dialog system in vehicles,” in Proceedings of the 6th workshop on Eye gaze in intelligent human machine interaction: gaze in multimodal interaction. ACM, 2013, pp. 25–28 …
Elderly Speech-Gaze Interaction
C Acartürk, J Freitas, M Fal, MS Dias – International Conference on …, 2015 – Springer
Elderly people face problems when using current forms of Human-Computer Interaction (HCI). Developing novel and natural methods of interaction would facilitate resolving some of those issues. We propo.
Robot-audition-based human-machine interface for a car
K Nakadai, T Mizumoto… – Intelligent Robots and …, 2015 – ieeexplore.ieee.org
… environment. In addition, as an integrated framework, HARK-Dialog was developed to build a multi-party and multi-modal dialog system, enabling the seamless use of cloud and local services with pluggable modular architecture …
Chatbots' Greetings to Human-Computer Communication
MJ Pereira, L Coheur, P Fialho, R Ribeiro – arXiv preprint arXiv …, 2016 – arxiv.org
… It should be clear that we exclude from this survey, authoring platforms such as the IrisTK26, the Visual SceneMaker (Gebhard et al., 2011), or the Virtual Human Toolkit27 (Hartholt et al., 2013), as these target multi-modal dialogue systems and not chatbots, as defined in the …
Socrob@ home: Team description paper for the competition event rockin@ home 2015
A Mateus, J Mendes, J ONeill, M Braga… – … Tcnico, Tech. Rep., 2015 – rockincompetition.eu
… Our multi-modal dialogue system is based on a FSM that coordinates the emission of canned sentences to both the TTS and the GUI, where the transitions depend on the user response (either through the ASR or the GUI). All user responses are explicitly confirmed by the robot …
Toward Designing a Realistic Conversational System: A Survey.
AM Ali, AJ Gonzalez – FLAIRS Conference, 2016 – aaai.org
… Therefore, combining both models did not result in improving the results. Another system known as SARA was introduced by Niculescu et al. (2014) as a multi-modal dialogue system that provides touristic information as a mobile application …
Biometric-enabled authentication machines: A survey of open-set real-world applications
SC Eastwood, VP Shmerko… – … on Human-Machine …, 2016 – ieeexplore.ieee.org
IEEE Transactions on Human-Machine Systems, vol. 46, no. 2, April 2016, p. 231. Shawn C. Eastwood, V. P. Shmerko …
Do you see what I see: towards a gaze-based surroundings query processing system
S Kang, B Kim, S Han, H Kim – … of the 7th International Conference on …, 2015 – dl.acm.org
… 8. T. Misu, A. Raux, I. Lane, J. Devassy, and R. Gupta. 2013. Situated Multi-modal Dialog System in Vehicles. In Proceedings of the 6th Workshop on Eye Gaze in Intelligent Human Machine Interaction (GazeIn). 9. J. Oh and N. Kwak. 2012 …
Design and development of multimodal applications: a vision on key issues and methods
S Silva, N Almeida, C Pereira, AI Martins… – … on Universal Access in …, 2015 – Springer
Multimodal user interfaces provide users with different ways of interacting with applications. This has advantages both in providing interaction solutions with additional robustness in environments wh.
The Willful Marionette: Exploring Responses to Embodied Interaction
K Grace, S Grace, ML Maher, MJ Mahzoon… – Proceedings of the …, 2017 – dl.acm.org
… Kazjon Grace, The University of Sydney and UNC Charlotte, Sydney, Australia and Charlotte, USA; Stephanie Grace, Bloomberg LP, New York, USA …
Context-Oriented Software Adaptation–A Brief State of the Art
K Mens, A Cleve, B Dumas – 2015 – researchgate.net
… 32-36, 2010. [AUI23] Bühler, D., Minker, W., Häussler, J., and Krüger, S. The SmartKom Mobile Multi-Modal Dialogue System. In Proceedings of ISCA Tutorial and Research Workshop (ITRW) on Multi-Modal Dialogue in Mobile Environments (Irsee, Germany), 2002 …
Military Usages of Speech and Language Technologies: A Review.
D Griol, JG Herrero, JM Molina – Meeting Security Challenges …, 2016 – books.google.com
Meeting Security Challenges Through Data Analytics and Decision Support, E. Shahbazian and G. Rogova (Eds.), IOS Press, 2016. © 2016 The authors and IOS Press. All rights reserved. doi: 10.3233/978-1-61499 …
Institute of Communications Engineering Staff
M Bossert, R Fischer, W Minker, UC Fiebig… – Proceedings of the …, 2017 – uni-ulm.de
Virtual coaches for healthy lifestyle
HJA op den Akker, R Klaassen, A Nijholt – Toward Robotic Socially …, 2016 – Springer
… 24]. The animated embodiment is often just the visual appearance of an embodied conversational agent (ECA) [21]. Under the hood the ECA is a more or less sophisticated (spoken or multi-modal) dialogue system. Cassell …
Object Referring in Videos with Language and Human Gaze
AB Vasudevan, D Dai, L Van Gool – arXiv preprint arXiv:1801.01582, 2018 – arxiv.org
… Arun Balajee Vasudevan, Dengxin Dai, Luc Van Gool; ETH Zurich, KU Leuven. We investigate the …
Probabilistic record type lattices for incremental reference processing
J Hough, M Purver – Modern perspectives in type-theoretical semantics, 2017 – Springer
… Dobnik et al. 2013), demonstrating its potential for situated, embodied and multi-modal dialogue systems. The possibility of integration of perceptron learning (Larsson 2011) and Naive Bayes learning (Cooper et al. 2014) into …
Towards an Empathic Social Robot for Ambient Assisted Living.
BN De Carolis, S Ferilli, G Palestra… – ESSEM …, 2015 – pdfs.semanticscholar.org
… In Proceedings of the 4th IEEE tutorial and research workshop on Perception and Interactive Technologies for Speech-Based Systems: Perception in Multi-modal Dialogue Systems (PIT'08), Springer-Verlag, Berlin, 188-199. 34 …
A corpus of read and conversational Austrian German
B Schuppler, M Hagmüller, A Zahrer – Speech Communication, 2017 – Elsevier
… Finally, most of the conversations (14 out of 19) were recorded with a video camera (Canon Legria HF M31 HD Camcorder). These recordings might be used in the future for the study of gestures, which is relevant for the development of multi-modal dialogue systems …
Multimodal Interactions with a Chatbot and study of Interruption Recovery in Conversation.
F Chabernaud – 2017 – macs.hw.ac.uk
… MSc thesis, Heriot-Watt University. Author: Fantine Chabernaud; Supervisor: Dr. Franck Broz …
Agentní přístup k dialogovému řízení (An agent-based approach to dialogue management)
T Nestorovi? – 2016 – otik.uk.zcu.cz
… complex utterances. This thesis, however, does not concern with multi-modal dialogue systems and interaction.1 The final step in designing a dialogue system is to interconnect the modules with a communication channel. In …
Multi-UAV ground control station for gliding aircraft
JC del Arco, D Alejo, BC Arrue… – … (MED), 2015 23th …, 2015 – ieeexplore.ieee.org
… This paper presents a Ground Control Station based on the Robot Operating System …
Introduction to statistical spoken dialogue systems
M Gašic – 2016 – cl.cam.ac.uk
… Speech does not need to be the only input. We can interact with machines also using touch, gesture or facial expressions and these are multi-modal dialogue systems. 3 / 32 Page 4. Examples from popular culture 4 / 32 Page 5. Dialogue and AI …
The Ethnobot: Gathering Ethnographies in the Age of IoT
E Tallyn, H Fried, R Gianni, A Isard… – Proceedings of the 2018 …, 2018 – dl.acm.org
Input Forager: a user-driven interaction adaptation approach for head worn displays
M Al-Sada, F Ishizawa, J Tsurukawa… – Proceedings of the 15th …, 2016 – dl.acm.org
… and services (pp. 127-136). ACM. 20. Bühler, D., Minker, W., Häussler, J., and Krüger, S. The SmartKom Mobile Multi-Modal Dialogue System. In Proceedings of ITRW 2002 (Irsee, Germany, June 2002). 21. David, L., Endler …
Implementation of image processing based Digital Dactylology Converser for deaf-mute persons
MY Javed, MM Gulzar, STH Rizvi… – Intelligent Systems …, 2016 – ieeexplore.ieee.org
… [4] W. Gao, J. Ma, S. Shan, X. Chen, W. Zeng, H. Zhang, 1. Yan, and J. Wang, "HandTalker: A multi modal dialog system using sign language and 3-D virtual human," Advances in Multimodal Interfaces (ICMI 2000), Third International Conference, Beijing, China, Oct 14-16, vol …
Situation-and user-adaptive dialogue management
G Bertrand – 2015 – oparu.uni-ulm.de
… Dissertation for the doctoral degree Dr. rer. nat., Institute of Communications Engineering: Dialogue Systems, University of Ulm. Gregor Bertrand, born in Ravensburg …
Sound synthesis for communicating nonverbal expressive cues
FA Martín, Á Castro-González, MÁ Salichs – IEEE Access, 2017 – ieeexplore.ieee.org
Received November 24, 2016; accepted December 29, 2016; date of publication February 1, 2017; date of current version March 13, 2017. Digital Object Identifier 10.1109/ACCESS.2017.2658726 …
Detecting and Adapting Conversational Agent Strategy to User's Emotions in Video Games
B Cagnol – 2017 – macs.hw.ac.uk
… Research Report, Heriot-Watt University. Author: Brice Cagnol; Supervisor: Prof. Oliver Lemon; Second Reader: Prof. Frank Broz …
Towards Dialogue Strategies for Cognitive Workload Management
J Villing – Accident Analysis and Prevention, 2015 – researchgate.net
… Jessica Villing, The Graduate School of Language Technology …
"Let's save resources!": A dynamic, collaborative AI for a multiplayer environmental awareness game
P Sequeira, FS Melo, A Paiva – Computational Intelligence and …, 2015 – ieeexplore.ieee.org
IEEE CIG 2015, Tainan, Taiwan, August 31 – September 2, 2015. Pedro Sequeira, Francisco S. Melo and …
A survey on design and development of an unmanned aerial vehicle (quadcopter)
D BBVL, P Singh – International Journal of Intelligent …, 2016 – emeraldinsight.com
Literature review for Deception detection
G An – 2015 – gc.cuny.edu
… A Second Exam submitted to the Graduate Faculty in Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy, The City University of New York, 2015 …
A physical robot’s effect on vocabulary learning
A Wedenborn – 2015 – diva-portal.org
… Degree project in Computer Science and Engineering, second cycle, KTH Royal Institute of Technology, Stockholm, Sweden, 2015 …
On Active Passengering: Supporting In-Car Experiences
K Matsumura, DS Kirk – Proceedings of the ACM on Interactive, Mobile …, 2018 – dl.acm.org
… Kohei Matsumura, Ritsumeikan University, Japan; David S. Kirk, Northumbria University, UK. We describe the development of an interactive car window …
Discovering Topic Trends for Conference Analytics
P LIU – 2017 – search.proquest.com
… A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Systems Engineering and Engineering Management, The Chinese University of Hong Kong …
Investigating Typing and Take-Over Performance in SAE Level 3-4 Automated Driving.
C Schartmüller – 2017 – researchgate.net
… Submitted by Clemens Schartmüller, BSc, at the Institute for Pervasive Computing, Johannes Kepler University Linz; supervisor Prof. Dr. Priv.-Doz. Andreas Riener. Linz, October 2017 …
A Self-Adaptive Sliding Window Based Topic Model for Non-uniform Texts
J He, L Li, X Wu – Data Mining (ICDM), 2017 IEEE International …, 2017 – ieeexplore.ieee.org
… Jin He, School of Computing and Information, HeFei University of Technology, Hefei, China; Lei Li, School …
Moving towards the semantic web: enabling new technologies through the semantic annotation of social contents.
C Vicient Monllaó – 2015 – deim.urv.cat
… Ph.D. thesis supervised by Dr. Antonio Moreno, Department of Computer Science and Mathematics, December 2014 …