Human-Robot Dialog 2016


Human-Robot Dialog / Human-Robot Dialogue

Notes:

ADE (the Agent Development Environment) contains components of the DIARC (Distributed Integrated Affect, Reflection, and Cognition) architecture.

Resources:

Wikipedia:

See also:

Best Robot Speech Recognition Videos


Situated open world reference resolution for human-robot dialogue
T Williams, S Acharya, S Schreitter… – The Eleventh ACM/IEEE …, 2016 – dl.acm.org
Abstract: A robot participating in natural dialogue with a human interlocutor may need to discuss, reason about, or initiate actions concerning dialogue-referenced entities. To do so, the robot must first identify or create new representations for those entities, a capability …

An implemented theory of mind to improve human-robot shared plans execution
S Devin, R Alami – Human-Robot Interaction (HRI), 2016 11th …, 2016 – ieeexplore.ieee.org
An Implemented Theory of Mind to Improve Human-Robot Shared Plans Execution. Sandra Devin, CNRS, LAAS, Univ de Toulouse, INP, LAAS, 7 avenue du colonel Roche, F-31400 Toulouse, France; Rachid Alami, CNRS …

Human–robot interaction review and challenges on task planning and programming
P Tsarouchi, S Makris… – International Journal of …, 2016 – Taylor & Francis
Human–robot interaction review and challenges on task planning and programming. Panagiota Tsarouchi, Sotiris Makris and George Chryssolouris*, Laboratory for Manufacturing Systems and Automation, Department …

Assessing Agreement in Human-Robot Dialogue Strategies: A Tale of Two Wizards
M Marge, C Bonial, KA Pollard, R Artstein… – … on Intelligent Virtual …, 2016 – Springer
Abstract: The Wizard-of-Oz (WOz) method is a common experimental technique in virtual agent and human-robot dialogue research for eliciting natural communicative behavior from human partners when full autonomy is not yet possible. For the first phase of our research …

Review of semantic-free utterances in social human–robot interaction
S Yilmazyildiz, R Read, T Belpaeme… – International Journal of …, 2016 – Taylor & Francis

A real-time human-robot interaction system based on gestures for assistive scenarios
G Canal, S Escalera, C Angulo – Computer Vision and Image …, 2016 – Elsevier
… stochastic segmentation algorithm. In [19], the user can define specific gestures that mean some message in a human-robot dialogue, and in [20] a framework to define user gestures to control a robot is presented. Deep neural …

Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy"
J Thomason, J Sinapov, M Svetlik, P Stone, RJ Mooney – IJCAI, 2016 – ijcai.org
… language. Learning grounded semantics through human-robot dialog allows a system to acquire the relevant knowledge without the need for laborious labeling of numerous objects for every potential lexical descriptor. A …

Dynamic generation and refinement of robot verbalization
V Perera, SP Selveraj, S Rosenthal… – Robot and Human …, 2016 – ieeexplore.ieee.org
… participant sample size (100 participants). Finally, we demonstrate human-robot dialog that is enabled by our verbalization algorithm and our learned verbalization space language classifier. II. RELATED WORK We identify …

Using human knowledge awareness to adapt collaborative plan generation, explanation and monitoring
G Milliez, R Lallement, M Fiore, R Alami – The Eleventh ACM/IEEE …, 2016 – dl.acm.org
Using Human Knowledge Awareness to Adapt Collaborative Plan Generation, Explanation and Monitoring. Grégoire Milliez, CNRS, LAAS, Univ de Toulouse, INP, 7 avenue du colonel Roche, F-31400 Toulouse, France …

Computational human-robot interaction
A Thomaz, G Hoffman, M Cakmak – Foundations and Trends® …, 2016 – nowpublishers.com
Computational Human-Robot Interaction. Andrea Thomaz, The University of Texas at Austin, USA, athomaz@ece.utexas.edu; Guy Hoffman, Cornell University, Ithaca, USA, hoffman@cornell.edu; Maya Cakmak, University of Washington, Seattle, USA, mcakmak@uw.edu …

The MuMMER project: Engaging human-robot interaction in real-world public spaces
ME Foster, R Alami, O Gestranius, O Lemon… – … Conference on Social …, 2016 – Springer
… directions. First, we will apply current state-of-the-art statistical models [13, 15, 21] to the new scenarios and tasks that arise in the context of engaging, entertaining, socially appropriate human-robot dialogue interaction. Second …

Investigating fluidity for human-robot interaction with real-time, real-world grounding strategies
J Hough, D Schlangen – … of the 17th Annual SIGdial Meeting on …, 2016 – pub.uni-bielefeld.de
Investigating Fluidity for Human-Robot Interaction with Real-time, Real-world Grounding Strategies. Julian Hough and David Schlangen, Dialogue Systems Group // CITEC // Faculty of Linguistics and Literature, Bielefeld University, firstname.lastname@uni-bielefeld.de …

Cognitive Human–Robot Interaction
B Mutlu, N Roy, S Šabanović – Springer Handbook of Robotics, 2016 – Springer
… 729–734. [71.136]. T. Fong, C. Thorpe, C. Baur: Collaboration, dialogue, human-robot interaction, Robotics Res. 6, 255–266 (2003)CrossRef. [71.137]. ME Foster, T. By, M. Rickert, A. Knoll: Human-robot dialogue for joint construction tasks, Proc. 8th Int. Conf. …

Situated Language Understanding with Human-like and Visualization-Based Transparency.
L Perlmutter, E Kernfeld… – Robotics: Science and …, 2016 – roboticsproceedings.org
… to the platform. B. Human-like transparency We implement three human-like transparency mechanisms. 1) Speech: The first mechanism is verbal confirmation; a common method in human-robot dialog (Sec. II). The robot generates …

Modeling communicative behaviors for object references in human-robot interaction
H Admoni, T Weng, B Scassellati – Robotics and Automation …, 2016 – ieeexplore.ieee.org
Modeling Communicative Behaviors for Object References in Human-Robot Interaction* Henny Admoni1, Thomas Weng2, and Brian Scassellati3. Abstract—This paper presents a model that uses a robot’s verbal and …

Exploiting deep semantics and compositionality of natural language for Human-Robot-Interaction
M Eppe, S Trott, J Feldman – Intelligent Robots and Systems …, 2016 – ieeexplore.ieee.org
Exploiting Deep Semantics and Compositionality of Natural Language for Human-Robot-Interaction. Manfred Eppe1, Sean Trott1, Jerome Feldman1. Abstract—We are developing a natural language interface for human …

Incremental acquisition of verb hypothesis space towards physical world interaction
L She, J Chai – Proceedings of the 54th Annual Meeting of the …, 2016 – aclweb.org
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 108–117, Berlin, Germany, August 7-12, 2016. ©2016 Association for Computational Linguistics. Incremental Acquisition …

Are you talking to me?: Improving the Robustness of Dialogue Systems in a Multi Party HRI Scenario by Incorporating Gaze Direction and Lip Movement of Attendees
V Richter, B Carlmeyer, F Lier… – Proceedings of the …, 2016 – dl.acm.org
… ABSTRACT In this paper, we present our humanoid robot Meka, participating in a multi party human robot dialogue scenario. … In this contribution we present our humanoid Meka (Figure 1), participating in a multi party human robot dialogue scenario. …

Social robotics
C Breazeal, K Dautenhahn, T Kanda – Springer Handbook of Robotics, 2016 – Springer
… Process. Int. Q. Cogn. Sci. 2(1), 177–198 (2001). S. Chernova, N. DePalma, C. Breazeal: Crowdsourcing real world human-robot dialog and teamwork through online multiplayer games, AAAI Magazine 32(4), 100–111 (2011). …

Instructable Intelligent Personal Agent.
A Azaria, J Krishnamurthy, TM Mitchell – AAAI, 2016 – aaai.org
Instructable Intelligent Personal Agent. Amos Azaria1, Jayant Krishnamurthy2, Tom M. Mitchell1. 1Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213; 2Allen Institute for Artificial Intelligence …

Human–robot planning and learning for marine data collection
T Somers, GA Hollinger – Autonomous Robots, 2016 – Springer

A Discriminative Approach to Grounded Spoken Language Understanding in Interactive Robotics.
E Bastianelli, D Croce, A Vanzo, R Basili, D Nardi – IJCAI, 2016 – pdfs.semanticscholar.org
A Discriminative Approach to Grounded Spoken Language Understanding in Interactive Robotics. Emanuele Bastianelli,1 Danilo Croce,2 Andrea Vanzo,3 Roberto Basili,2 Daniele Nardi3. 1DICII, 2DII, University of Rome …

Driven learning for driving: How introspection improves semantic mapping
R Triebel, H Grimmett, R Paul, I Posner – Robotics Research, 2016 – Springer
… Recent work by Tellex et al. [18] explores active information gathering for human-robot dialog. … Tellex, S., Thaker, P., Deits, R., Kollar, T., Roy, N.: Toward information theoretic human-robot dialog. In: Robotics: Science and Systems (2012). 19. …

Towards Modeling Confidentiality in Persuasive Robot Dialogue
ID Addo, SI Ahamed, WC Chu – … Conference on Smart Homes and Health …, 2016 – Springer
… Consequently, in conversational service robot scenarios (including elderly care use cases), humans often have the expectation that humanoid robots are capable of preserving the privacy and confidentiality of a given human-robot dialogue. …

Task Learning through Visual Demonstration and Situated Dialogue.
C Liu, JY Chai, N Shukla, SC Zhu – AAAI Workshop: Symbiotic Cognitive …, 2016 – aaai.org
… She, L.; Yang, S.; Cheng, Y.; Jia, Y.; Chai, JY; and Xi, N. 2014. Back to the blocks world: Learning new actions through situated human-robot dialogue. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 89. …

Some essential skills and their combination in an architecture for a cognitive and interactive robot
S Devin, G Milliez, M Fiore, A Clodic… – arXiv preprint arXiv …, 2016 – arxiv.org
… Symp. on Robot and Human Interactive Communication, pp. 1103–1109, IEEE, 2014. [14] E. Ferreira, G. Milliez, F. Lefevre, and R. Alami, “Users’ belief awareness in reinforcement learning-based situated human-robot dialogue management,” in IWSDS, 2015. …

Using Knowledge Representation and Reasoning Tools in the Design of Robots.
M Sridharan, M Gelfond – KnowProS@ IJCAI, 2016 – ceur-ws.org
… couple declarative programming and continuous-time planners for path planning in teams [Saribatur et al., 2014], combine a probabilistic extension of ASP with POMDPs for human-robot dialog [Zhang and Stone, 2015], combine logic programming and reinforcement learning to …

Computational argumentation to support multi-party human-robot interaction: Challenges and advantages
E Black, EI Sklar – Groups in HRI Workshop at the IEEE ROMAN …, 2016 – nms.kcl.ac.uk
… model with two types of patients. Our goal is to evaluate the effectiveness of argumentation-based multi-party human-robot dialogue for improving patient compliance with treatment recommendations. The first case will focus …

Experience-Based Robot Task Learning and Planning with Goal Inference.
V Mokhtari, LS Lopes, AJ Pinho – ICAPS, 2016 – aaai.org
Experience-Based Robot Task Learning and Planning with Goal Inference. Vahid Mokhtari, Luís Seabra Lopes and Armando J. Pinho, IEETA, University of Aveiro, Aveiro, Portugal {mokhtari.vahid, lsl, ap}@ua.pt. Abstract …

Action-coordinating prosody
NG Ward, S Abu – Speech Prosody, 2016 – pdfs.semanticscholar.org
… Language, vol. 28, pp. 903–922, 2014. [16] G. Skantze, C. Oertel, and A. Hjalmarsson, “User feedback in human-robot dialogue: Task progression and uncertainty,” in HRI Workshop on Timing in Human-Robot Interaction, 2014. [17] G …

Influence of Robot Gender and Speaker Gender on Prosodic Entrainment in HRI
E Strupka, O Niebuhr, K Fischer – Proceedings of the 25th IEEE …, 2016 – researchgate.net
Abstract—In this paper, we investigate acoustic-prosodic entrainment in human-robot dialog with a focus on gender. We carried out experiments with an equal number of male and female participants (N=16) from different …

Experience-based planning domains: An integrated learning and deliberation approach for intelligent robots
V Mokhtari, LS Lopes, AJ Pinho – Journal of Intelligent & Robotic Systems, 2016 – Springer
… goals and not the actions required. In [30], the goal is identified simply as the difference between the start and end states of an experience guided through human-robot dialog. In another approach, the task model includes …

Jointly Learning Grounded Task Structures from Language Instruction and Visual Demonstration.
C Liu, S Yang, S Saba-Sadiya, N Shukla, Y He… – …, 2016 – pdfs.semanticscholar.org
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1482–1492, Austin, Texas, November 1-5, 2016. ©2016 Association for Computational Linguistics. Jointly Learning …

Speaky for robots: the development of vocal interfaces for robotic applications
E Bastianelli, D Nardi, LC Aiello, F Giacomelli… – Applied …, 2016 – Springer

Grounded Semantic Role Labeling.
S Yang, Q Gao, C Liu, C Xiong, SC Zhu, JY Chai – HLT-NAACL, 2016 – aclweb.org
Proceedings of NAACL-HLT 2016, pages 149–159, San Diego, California, June 12-17, 2016. ©2016 Association for Computational Linguistics. Grounded Semantic Role Labeling. Shaohua Yang1, Qiaozi Gao1, Changsong Liu1, Caiming Xiong2, …

See you see me: The role of eye contact in multimodal human-robot interaction
TL Xu, H Zhang, C Yu – ACM Transactions on Interactive Intelligent …, 2016 – dl.acm.org
See You See Me: The Role of Eye Contact in Multimodal Human-Robot Interaction. TIAN (LINGER) XU, Indiana University at Bloomington; HUI ZHANG, University of Louisville; CHEN YU, Indiana University at Bloomington …

Human-robot interaction strategies for walker-assisted locomotion
CA Cifuentes, A Frizera – 2016 – Springer
Springer Tracts in Advanced Robotics 115. Carlos A. Cifuentes, Anselmo Frizera: Human-Robot Interaction Strategies for Walker-Assisted Locomotion …

Resolution of referential ambiguity using dempster-shafer theoretic pragmatics
T Williams, M Scheutz – AAAI Fall Symposium on AI and HRI, 2016 – aaai.org
… Cambridge University Press. Deits, R.; Tellex, S.; Thaker, P.; Simeonov, D.; Kollar, T.; and Roy, N. 2013. Clarifying commands with information-theoretic human-robot dialog. Journal of Human-Robot Interaction 2(2):58–79. Fong, T.; Thorpe, C.; and Baur, C. 2001. …

Improving grounded language acquisition efficiency using interactive labeling
N Pillai, KK Budhraja, C Matuszek – Robotics: Science and …, 2016 – ece.rochester.edu
… models. AI magazine, 32 (4):64–76, 2011. [21] Stefanie Tellex, Pratiksha Thaker, Robin Deits, Dimitar Simeonov, Thomas Kollar, and Nicholas Roy. Toward information theoretic human-robot dialog. Robotics, page 409, 2013.

Verbalization: Narration of Autonomous Robot Experience.
S Rosenthal, SP Selvaraj, MM Veloso – IJCAI, 2016 – ijcai.org
… [Thomason et al., 2015] Jesse Thomason, Shiqi Zhang, Raymond Mooney, and Peter Stone. Learning to interpret natural language commands through human-robot dialog. In Proceedings of the 2015 International Joint Conference on Artificial Intelligence (IJCAI), July 2015. …

Program robots manufacturing tasks by natural language instructions
Y Jia, L She, Y Cheng, J Bao… – … (CASE), 2016 IEEE …, 2016 – ieeexplore.ieee.org
… 868–873, IEEE, 2014. [14] L. She, S. Yang, Y. Cheng, Y. Jia, JY Chai, and N. Xi, “Back to the blocks world: Learning new actions through situated human-robot dialogue,” in 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, p. 89, 2014. …

Learning interactive behavior for service robots the challenge of mixed-initiative interaction
P Liu, DF Glas, T Kanda, H Ishiguro – Proceedings of the Workshop on …, 2016 – irc.atr.jp
… Service Robotics, vol. 1, pp. 159-167, 2008. [10] R. Meena, G. Skantze, and J. Gustafson, “A Data-driven Approach to Understanding Spoken Route Directions in Human-Robot Dialogue,” in INTERSPEECH, 2012. [11] D. Brščić …

ReinForest: Multi-Domain Dialogue Management Using Hierarchical Policies and Knowledge Ontology
T Zhao – 2016 – pdfs.semanticscholar.org
… Intell. Res. (JAIR), 13:227–303, 2000. [6] Thomas K Harris and Alexander I Rudnicky. Teamtalk: A platform for multi-human-robot dialog research in coherent real and virtual spaces. In Proceedings of the National Conference on Artificial Intelligence, volume 22, page 1864. …

Planning with Task-Oriented Knowledge Acquisition for a Service Robot.
K Chen, F Yang, X Chen – IJCAI, 2016 – ijcai.org
… Tractability of planning with loops. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015. [Thomason et al., 2015] Jesse Thomason, Shiqi Zhang, Raymond Mooney, and Peter Stone. Learning to interpret natural language commands through human-robot dialog. …

Data-Driven HRI: Learning Social Behaviors by Example From Human–Human Interaction
P Liu, DF Glas, T Kanda… – IEEE Transactions on …, 2016 – ieeexplore.ieee.org
988 IEEE TRANSACTIONS ON ROBOTICS, VOL. 32, NO. 4, AUGUST 2016. Data-Driven HRI: Learning Social Behaviors by Example From Human–Human Interaction. Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Member, IEEE, and Hiroshi Ishiguro, Member, IEEE …

Social Affordance Tracking over Time-A Sensorimotor Account of False-Belief Tasks
J Bütepage, H Kjellström, D Kragic – Proc. 38th Annual Meeting of the …, 2016 – nada.kth.se
… Biological Cybernetics, 109(4-5), 453–467. Ferreira, E., Milliez, G., Lefevre, F., & Alami, R. (2015). Users’ belief awareness in reinforcement learning-based situated human-robot dialogue management. In 7th International Workshop on Spoken Dialogue Systems. …

An integrated system for interactive continuous learning of categorical knowledge
D Skočaj, A Vrečko, M Mahnič, M Janíček… – … of Experimental & …, 2016 – Taylor & Francis
JOURNAL OF EXPERIMENTAL & THEORETICAL ARTIFICIAL INTELLIGENCE, 2016, VOL. 28, NO. 5, 823–848. http://dx.doi.org/10.1080/0952813X.2015.1132268. An integrated system for interactive continuous learning of categorical knowledge …

Task execution based-on human-robot dialogue and deictic gestures
P Yan, B He, L Zhang, J Zhang – Robotics and Biomimetics …, 2016 – ieeexplore.ieee.org
Abstract: Service robots have already been able to implement explicit and simple tasks assigned by human beings, but they still lack the ability to act like humans who can analyze the assigned task and ask questions to acquire supplementary information to resolve …

Collaborative Language Grounding Toward Situated Human-Robot Dialogue.
JY Chai, R Fang, C Liu, L She – AI Magazine, 2016 – cse.msu.edu
… To enable situated human-robot dialogue, techniques to support grounded language communication are essential. One particular challenge is to ground human language to a robot’s internal representation of the physical world. Although copresent in a shared …

Analysis of empirical results on argumentation-based dialogue to support shared decision making in a human-robot team
MQ Azhar, EI Sklar – … (RO-MAN), 2016 25th IEEE International …, 2016 – ieeexplore.ieee.org
… Current research in human-robot dialogue explores a wide range of opportunities and challenges and represents diverse research areas, including robotics, multi-modal interfaces, natural language processing, spoken dia- …

Toward a model for handling noise in human-robot communication
EI Sklar, E Black – nms.kcl.ac.uk
… collaboratively seek the answer to a question that neither knows the answer to. Following [34], we apply these models to human-robot dialogue. In this section, we begin by explaining our terminology and reviewing some key …

Tom Williams
M Scheutz – pdfs.semanticscholar.org
… Conference Papers: Tom Williams, Saurav Acharya, Stephanie Schreitter, and Matthias Scheutz. Situated open world reference resolution for human-robot dialogue. … 38% acceptance rate, 2016. Tom Williams. Towards more natural human-robot dialogue. …

Toward specifying Human-Robot Collaboration with composite events
J Van den Bergh, FC Lucero, K Luyten… – Robot and Human …, 2016 – ieeexplore.ieee.org
… V. DISCUSSION This paper presented how multiple levels of abstraction can be used to create executable specifications of multimodal human-robot collaboration. Finite state machines are used to define the overall human-robot dialog. …

Language Grounding towards Situated Human-Robot Communication
JY Chai – coli.uni-saarland.de
… C. Liu and JY Chai. Learning to Mediate Perceptual Differences in Situated Human-Robot Dialogue. … L. She, S. Yang, Y. Cheng, Y. Jia, JY Chai, and N. Xi, Back to the Blocks World: Learning New Actions through Situated Human-Robot Dialogue, SIGDIAL 2014 Page 13. …

How to improve human-robot interaction with Conversational Fillers
N Wigdor, J de Greeff, R Looije… – Robot and Human …, 2016 – ieeexplore.ieee.org
… 54–63. [20] C. Liu, CT Ishi, H. Ishiguro, and N. Hagita, “Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction,” in Human-Robot Interaction (HRI), 2012 7th ACM/IEEE International Conference on. IEEE, 2012, pp. 285–292. …

Mimic Recognition and Reproduction in Bilateral Human-Robot Speech Communication
A Yuschenko, S Vorotnikov, D Konyshev… – … Conference on Interactive …, 2016 – Springer
… The interpretation of the corresponding emotion on the “face” of the service robot makes mutual understanding in human-robot dialogue easier. Together with the usual textual messages (from “speech-to-text”), the emotional messages can be used throughout the dialogue. …

SmartTalk: A Learning-Based Framework for Natural Human-Robot Interaction
C Fabbri, J Sattar – Computer and Robot Vision (CRV), 2016 …, 2016 – ieeexplore.ieee.org
… have investigated risk functions for a safer navigation of a robotic nurse in the care of the elderly [9]. Researchers have looked into POMDP formulations of human-robot dialog models [10], [11] and planning cost models for efficient human–robot interaction tasks [12], [13]. …

Multi-modal referring expressions in human-human task descriptions and their implications for human-robot interaction
S Gross, B Krenn, M Scheutz – Interaction Studies, 2016 – jbe-platform.com
Interaction Studies 17:2 (2016), 180–210. doi 10.1075/is.17.2.02gro. issn 1572–0373 / e-issn 1572–0381. © John Benjamins Publishing Company. Multi-modal referring expressions in human-human task descriptions and their implications for human-robot interaction …

Applications of Adaptive Learning Mechanism based on Human-Robot Interaction
L Devi – pdfs.semanticscholar.org
… In Proceedings of the 2012 Conference on Social Robotics, LNCS, 238–247. Springer. [6] Briggs, G., and Scheutz, M. 2012b. Multi-modal belief updates in multi-robot human-robot dialogue interactions. In Proceedings of AISB 2012. Briggs, G., and Scheutz, M. 2013. …

Empirical Study of Humor Support in Social Human-Robot Interaction
L Bechade, GD Duplessis, L Devillers – International Conference on …, 2016 – Springer
… Mode adaptation can be elicited by many kinds of humor. Norrick [15] provides examples of spontaneous conversational punning that elicits further punning from other participants. 3 Data Collection of Social Human-Robot Dialog. 3.1 Interaction Scenarios. …

Human-Robot Interaction for Assisting Human Locomotion
CA Cifuentes, A Frizera – Human-Robot Interaction Strategies for Walker …, 2016 – Springer
… As can be observed, gesture and mimic recognition is an ongoing research activity in the fields of human-computer and human-robot dialog. Recently, there are some related works that use visual sensing for smart walkers in indoor environments. …

Semantic Likelihood Models for Bayesian Inference in Human-Robot Interaction
N Sweet – 2016 – search.proquest.com
Semantic Likelihood Models for Bayesian Inference in Human-Robot Interaction. Abstract: Autonomous systems, particularly unmanned aerial systems (UAS), remain limited in autonomous capabilities largely due to a poor understanding of their environment. …

Towards An Architecture for Representation, Reasoning and Learning in Human-Robot Collaboration
M Sridharan – 2016 AAAI Spring Symposium Series, 2016 – aaai.org
… planning (Kaelbling and Lozano-Perez 2013), couple declarative programming and continuous-time planners for path planning in robot teams (Saribatur, Erdem, and Patoglu 2014), or combine a probabilistic extension of ASP with POMDPs for human-robot dialog (Zhang and …

A Unified Knowledge Representation System for Robot Learning and Dialogue
N Shukla – 2016 – search.proquest.com
… She, L.; Yang, S.; Cheng, Y.; Jia, Y.; Chai, JY; and Xi,N. 2014. Back to the blocks world: Learning new actions through situated human-robot dialogue. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 89. …

Laser Graphics in Augmented Reality Applications for Real-World Robot Deployment
G Seet, V Iastrebov, DQ Huy… – Recent Advances in …, 2016 – intechopen.com
… 2003, pp. 255–266. 5 – ME Foster, M. Giuliani, A. Isard, C. Matheson, J. Oberlander, and A. Knoll, “Evaluating description and reference strategies in a cooperative human-robot dialogue system,” in IJCAI, 2009, pp. 1818–1823. 6 …

Investigating Breakdowns in HRI: A Conversation Analysis Guided Single Case Study of a Human-NAO Communication in a Museum Environment
B Arend, P Sunnen, P Caire – waset.org
… 21, no 3, 2011, pp. 393-410. http://dx.doi.org/10.1075/prag.21.3.05hol [20] B. Mutlu, S. Andrist & A. Sauppé, “Enabling human-robot dialogue”, in J. Markowitz (ed.), Robots that Talk and Listen. Berlin: De Gruyter, 2015, pp. 81-124.

WrightEagle@ Home 2016 Team Description Paper
W Shuai, J Liu, X Wang, F Zhou, X Chen – ais.uni-bonn.de
… In general service scenarios, our robot is driven by human speech orders, as input of the robot’s Human-Robot Dialogue module. Through the Speech Understanding module, the utterances from users are translated into the internal representations of the robot. …

Contexts for Symbiotic Autonomy: Semantic Mapping, Task Teaching and Social Robotics.
R Capobianco, G Gemignani, L Iocchi, D Nardi… – AAAI Workshop …, 2016 – aaai.org
… She, L.; Yang, S.; Cheng, Y.; Jia, Y.; Chai, JY; and Xi, N. 2014. Back to the blocks world: Learning new actions through situated human-robot dialogue. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 89. Takayama, L., and Pantofaru, C. 2009. …

Sean Andrist
SC Laude – Frontiers in Psychology – pdfs.semanticscholar.org
… Perceptual Common Ground in Communication with Embodied Agents. Submitted to Topics in Cognitive Science. Book Chapters • Mutlu, B., Andrist, S., and Sauppé, A. Enabling Human-Robot Dialogue. In J. Markowitz (Ed.) Robots that Talk and Listen. De Gruyter. …

Design of human-robot interactive interface based on identity recognition
P Ai, QJ Zhao, ZN Ke, CH Huang… – Audio, Language and …, 2016 – ieeexplore.ieee.org
… Fig. 5 (b) is a human-robot interactive interface, which mainly includes the identification area, robot vision, human-robot dialogue area and the application area. Fig. 5. Experimental environment and human-robot interactive interface …

Voice Dialogue with a Collaborative Robot Driven by Multimodal Semantics
A Kharlamov, K Ermishin – International Conference on Interactive …, 2016 – Springer
… Within the framework of human-robot dialogue there are issues related to recognizing the meaning of sentences addressed to the robot in natural language [2], which are solved by using a special dimensional language for describing processes and effective understanding …

Continuously Improving Natural Language Understanding for Robotic Systems through Semantic Parsing, Dialog, and Multi-modal Perception
J Thomason – 2016 – pdfs.semanticscholar.org
… Learning to Interpret Natural Language Commands through Human-Robot Dialog: Intelligent robots need to understand requests from naive users through natural language. …

Collaborative autonomous sensing with Bayesians in the loop
N Ahmed – … /Unattended Sensors and Sensor Networks XII, 2016 – spiedigitallibrary.org
PROCEEDINGS OF SPIE. Collaborative autonomous sensing with Bayesians in the loop. Nisar Ahmed …

Reference Resolution in Situated Dialogue with Learned Semantics.
X Li, K Boyer – SIGDIAL Conference, 2016 – aclweb.org
Proceedings of the SIGDIAL 2016 Conference, pages 329–338, Los Angeles, USA, 13-15 September 2016. ©2016 Association for Computational Linguistics. Reference Resolution in Situated Dialogue with Learned …

Recommender Interfaces: The More Human-Like, the More Humans Like
M Staffa, S Rossi – International Conference on Social Robotics, 2016 – Springer
… In: ICRA, pp. 4138–4142. IEEE (2002). 5. Caccavale, R., Leone, E., Lucignano, L., Rossi, S., Staffa, M., Finzi, A.: Attentional regulations in a situated human-robot dialogue. In: The 23rd IEEE International Symposium on Robot and Human Interactive Communication, pp. …

Estimating the User’s State before Exchanging Utterances Using Intermediate Acoustic Features for Spoken Dialog Systems.
Y Chiba, T Nose, M Ito, A Ito – IAENG International Journal of Computer …, 2016 – iaeng.org
… For instance, Satake et al. studied how to find an interaction target and how to approach the person in human-robot dialog [4], Hudson et al. examined a method to predict the willingness to be interrupted of a user in a working environment [5], and Michalowski et al. …

KR 3 L: An Architecture for Knowledge Representation, Reasoning and Learning in Human-Robot Collaboration
M Sridharan – homepages.engineering.auckland …
… Researchers have designed architectures that combine deterministic and probabilistic algorithms for task and motion planning [Kaelbling and Lozano-Perez, 2013], or combine a probabilistic extension of ASP with POMDPs for human-robot dialog [Zhang and Stone, 2015]. …

Making Turn-Taking Decisions for an Active Listening Robot for Memory Training
M Johansson, T Hori, G Skantze, A Höthker… – … Conference on Social …, 2016 – Springer
… and avoid non-sequitur responses. The model was trained on human-robot dialogue data collected in a Wizard-of-Oz setting, and evaluated in a fully autonomous version of the same dialogue system. Compared to a baseline …

Shaping dialogues with a humanoid robot based on an E-learning system
S Matsuura, M Naito – … Science & Education (ICCSE), 2016 11th …, 2016 – ieeexplore.ieee.org
… The dialogue can be an oral storytelling to share a narrative with a few exchanges of words. Particularly, in the human–driven human–robot dialogue, the human can play a supporting role to permit the robot to tell the main story. …

Things that Make Robots Go HMMM: Heterogeneous Multilevel Multimodal Mixing to Realise Fluent, Multiparty, Human-Robot Interaction
NCEDD Reidsma – researchgate.net
… Language. I. INTRODUCTION The main objective of this project is to bring forward the state of the art in fluent human-robot dialogue by improving the integration between deliberative and (semi)autonomous behaviour control. The …

Heterogeneous Multi-Modal Mixing
D Reidsma, D Davison, E Dertien – hmi.ewi.utwente.nl
… 1 Objective and Background The main objective of this project is to bring forward the state of the art in fluent human-robot dialog by improving the integration between deliberative and (semi)autonomous / reactive behavior control. …

A task manager using an ontological framework for a HARMS-based system
AR Wagoner, ET Matson – Journal of Ambient Intelligence and Humanized …, 2016 – Springer
… In: 2015 6th international conference on automation, robotics and applications (ICARA), pp 484–489. doi:10.1109/ICARA.2015.7081196. Skubic M, Shultz D, Adams W (2002) Using spatial language in a human–robot dialogue. …

Recognizing Intention from Natural Language: Clarification Dialog and Construction Grammar
S Trott, M Eppe, J Feldman – publications.eppe.eu
… [13] Joyce Y Chai, Lanbo She, Rui Fang, Spencer Ottarson, Cody Littley, Changsong Liu, and Kenneth Hanson. Collaborative effort towards common ground in situated human-robot dialogue. … Clarifying Commands with Information-Theoretic Human-Robot Dialog. …

Architectural Mechanisms for Situated Natural Language Understanding in Uncertain and Open Worlds.
T Williams – AAAI, 2016 – aaai.org
… The contribution of this dissertation will thus be a set of architectural mechanisms for natural language understanding and generation in uncertain and open worlds, extending the state of the art on a variety of problems important for natural human-robot dialogue. …

Psychological Evaluation on Influence of Appearance and Synchronizing Operation of Android Robot
K Tanaka, M Yoshikawa, Y Wakita… – International Symposium …, 2016 – Springer
… Int. J. Hum. Comput. Stud. 59(1), 119–155 (2003). 5. Liu, C., et al.: Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction. In: Proceedings of ACM/IEEE International Conference on Human–Robot Interaction, pp. …

The role of the vocal stream in telepresence communication
BS Bamoallem, AJ Wodehouse – Proceedings of the 9th ACM …, 2016 – dl.acm.org
… system. A questionnaire and video recorded data will be used in our analysis. 5. REFERENCES 1. C. Liu, et al., “Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction,” Proc. Human-Robot …

Intelligent Virtual Agents: 16th International Conference, IVA 2016, Los Angeles, CA, USA, September 20–23, 2016, Proceedings
D Traum, W Swartout, P Khooshabeh, S Kopp… – 2016 – books.google.com
… Agents….. 479 Masato Fukuda, Hung-Hsuan Huang, Naoki Ohta, and Kazuhiro Kuwabara Page 17. XVI Contents Assessing Agreement in Human-Robot Dialogue Strategies: A Tale of Two Wizards….. 484 …

Learning functional argument mappings for hierarchical tasks from situation specific explanations
G Suddrey, M Eich, F Maire, J Roberts – Australasian Joint Conference on …, 2016 – Springer
… 49–56. IEEE (2007). 8. She, L., Yang, S., Cheng, Y., Jia, Y., Chai, JY, Xi, N.: Back to the blocks world: learning new actions through situated human-robot dialogue. In: 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, vol. …

Social Feedback For Robotic Collaboration
E Wu, N Gopalan, J MacGlashan, S Tellex, LLS Wong – 2016 – h2r.cs.brown.edu
Page 1. Social Feedback For Robotic Collaboration Emily Wu Professor Stefanie Tellex Professor Jeff Huang Brown University May 2016 Page 2. Abstract Robotic collaboration requires not just communication from the human to the robot, but also from the robot to the human. …

Addressing Knowledge Integration with a Frame-Driven Approach
L Asprino – European Knowledge Acquisition Workshop, 2016 – Springer
… s/he prefers. The topic of a human-robot dialogue may spread from the user’s personal memories to news. In order to interact with a user, a robot’s KB must contain the information relevant for the discourse. A possible solution …

Standardization of a Heterogeneous Robots Society Based on ROS
I Rodriguez, E Jauregi, A Astigarraga, T Ruiz… – Robot Operating System …, 2016 – Springer
… Foster et al. [7] propose a human-robot dialogue system for the robot JAST, where the user and the robot work together to assemble wooden construction toys on a common workspace, coordinating their actions through speech, gestures, and facial displays. …

Curious partner: An approach to realize common ground in human-autonomy collaboration
SS Mehta, C Ton, EA Doucette… – Systems, Man, and …, 2016 – ieeexplore.ieee.org
… Cambridge university press, 1996. [2] J. Chai, L. She, R. Fang, S. Ottarson, C. Littley, C. Liu, and K. Hanson, “Collaborative effort towards common ground in situated human-robot dialogue,” in ACM/IEEE International Conference on Human-Robot Interaction. …

Robust comprehension of natural language instructions by a domestic service robot
T Kobori, T Nakamura, M Nakano, T Nagai… – Advanced …, 2016 – Taylor & Francis
Page 1. ADVANCED ROBOTICS, 2016 VOL. 30, NO. 24, 1530–1543 http://dx.doi.org/10.1080/01691864.2016.1252689 FULL PAPER Robust comprehension of natural language instructions by a domestic service robot Takahiro …

Teach robots understanding new object types and attributes through natural language instructions
J Bao, Z Hong, H Tang, Y Cheng… – … (ICST), 2016 10th …, 2016 – ieeexplore.ieee.org
… [13] M. Johnson-Roberson, J. Bohg, G. Skantze, and J. Gustafson. Enhanced visual scene understanding through human-robot dialog. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011. [14] C. Liu, R. Fang, L. She, and JY Chai. …

Human Computer Interaction And Natural Hand Gestures Recognition System
S Mittal, G Singla, A Dinarpur – 2016 – ijedr.org
… every single gesture type. After this method was tested in the area of the JAST human-robot dialog arrangement, it categorized more than 93% of gestures correctly. This algorithm proceeds in three major steps. The first step is to …

MIOM: A Mixed-Initiative Operational Model for Robots in Urban Search and Rescue
M Gianni, F Nardi, F Ferri, F Cantucci… – World Academy of …, 2016 – waset.org
… In [16], an approach for semi-autonomous navigation is proposed. This approach relies on the automatic detection of interesting navigational points and a human-robot dialog aimed at inferring the user’s intended action. During …

Representing and Reasoning with Logical and Probabilistic Knowledge on Robots
M Sridharan, M Gelfond – pdfs.semanticscholar.org
… for task and motion planning [Kaelbling and Lozano-Perez, 2013], couple declarative programming and continuous-time planners for path planning in teams [Saribatur et al., 2014], combine a probabilistic extension of ASP with POMDPs for human-robot dialog [Zhang and …
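Several entries in this list cite architectures that pair symbolic (ASP-style) knowledge with POMDPs for dialog. The POMDP side rests on the standard Bayesian belief update b'(s') ∝ O(o | s', a) · Σₛ T(s' | s, a) · b(s). The sketch below shows that update on a toy dialog state space; the states, actions, and probabilities are invented for illustration and are not from the cited work:

```python
# Generic POMDP belief update on a toy dialog problem: the hidden state
# is which object the user means; asking a question does not change it.

def belief_update(belief, T, O, action, observation):
    """belief: {state: prob}; T[a][s][s']: transition; O[a][s'][o]: observation."""
    new_belief = {}
    for s_next in belief:
        predicted = sum(T[action][s][s_next] * belief[s] for s in belief)
        new_belief[s_next] = O[action][s_next][observation] * predicted
    z = sum(new_belief.values())          # normalize
    return {s: p / z for s, p in new_belief.items()}

states = ["mug", "bottle"]
# Identity transitions: asking a question leaves the referent unchanged.
T = {"ask_color": {s: {s2: 1.0 if s == s2 else 0.0 for s2 in states}
                   for s in states}}
# The user answers "red": likely if they mean the mug, unlikely for the bottle.
O = {"ask_color": {"mug":    {"red": 0.8, "blue": 0.2},
                   "bottle": {"red": 0.1, "blue": 0.9}}}

b = {"mug": 0.5, "bottle": 0.5}
b = belief_update(b, T, O, "ask_color", "red")
print(b)  # belief shifts strongly toward "mug"
```

In the cited hybrid architectures, the symbolic layer supplies the state space and priors while a POMDP solver chooses the next question; only the update step is shown here.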

BILGE MUTLU, CURRICULUM VITAE
F Researcher Intern – pages.cs.wisc.edu
… In R. Bahr & E. Silliman (Eds.) Handbook of Communication Disorders. Routledge. B.1 MUTLU, B., ANDRIST, S.(S), & SAUPPÉ, A.(S) (2014). Enabling Human-Robot Dialogue. In J. Markowitz (Ed.) Robots that Talk and Listen. De Gruyter. JOURNAL ARTICLES 2015 …

KURE: a Two-Way Adaptive System for Intuitive Robot Control
C Melidis, D Marocco – researchgate.net
… 21, pp. 1–8, 2014. [6] S. Bodiroza, HI Stern, and Y. Edan, “Dynamic gesture vocabulary design for intuitive human-robot dialog,” Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction – HRI ’12, p. 111, 2012. …

Virtour: Telepresence System for Remotely Operated Building Tours
P Lankenau – 2016 – pdfs.semanticscholar.org
… 5 Page 6. al., 2016) as well as through human-robot dialog (Thomason et al., 2015). … 16 Page 17. Thomason, J., Zhang, S., Mooney, R., & Stone, P. (2015). Learning to interpret natural language commands through human-robot dialog. …

Navigational Instruction Generation as Inverse Reinforcement Learning with Neural Machine Translation
AF Daniele, M Bansal, MR Walter – arXiv preprint arXiv:1610.03164, 2016 – arxiv.org
Page 1. Navigational Instruction Generation as Inverse Reinforcement Learning with Neural Machine Translation Andrea F. Daniele TTI-Chicago, USA afdaniele@ttic.edu Mohit Bansal UNC Chapel Hill, USA mbansal@cs.unc.edu …

The OFAI Multimodal Task Description Corpus
S Gross, B Krenn – pdfs.semanticscholar.org
… Williams, T., Acharya, S., Schreitter, S., and Scheutz, M. (2016). Situated open world reference resolution for human-robot dialogue. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016), Christchurch, New Zealand. …

Learning Qualitative Spatial Relations for Robotic Navigation.
A Boularias, F Duvallet, J Oh, A Stentz – IJCAI, 2016 – ri.cmu.edu
… In Proceedings of the 25th AAAI Conference on Artificial Intelligence, 2011. [Tellex et al., 2012] Stefanie Tellex, Pratiksha Thaker, Robin Deits, Thomas Kollar, and Nicholas Roy. Toward information theoretic human-robot dialog. In Robotics: Science and Systems VIII, 2012. …

Supporting Spoken Assistant Systems with a Graphical User Interface that Signals Incremental Understanding and Prediction State
C Kennington, D Schlangen – Proceedings of the 17th Annual …, 2016 – pub.uni-bielefeld.de
Page 1. Supporting Spoken Assistant Systems with a Graphical User Interface that Signals Incremental Understanding and Prediction State Casey Kennington and David Schlangen Boise State University and DSG / CITEC / Bielefeld …

Analytic approach for natural language based supervisory control of robotic manipulations
Y Cheng, J Bao, Y Jia, Z Deng… – … (ROBIO), 2016 IEEE …, 2016 – ieeexplore.ieee.org
… 9, pp. 21 054–21 074, 2015. [19] L. She, S. Yang, Y. Cheng, Y. Jia, JY Chai, and N. Xi, “Back to the blocks world: Learning new actions through situated human-robot dialogue,” in 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, vol. 89, 2014. 336

Design of a Low-Cost Platform for Autonomous Mobile Service Robots
E Eaton, C Mucchiani, M Mohan, D Isele, JM Luna… – pdfs.semanticscholar.org
… [Kollar et al., 2013] Thomas Kollar, Vittorio Perera, Daniele Nardi, and Manuela Veloso. Learning environmental knowledge from task-based human-robot dialog. In IEEE International Conference on Robotics and Automation (ICRA), pages 4304–4309. IEEE, 2013. …

AI’s 10 to Watch
D Zeng – 2016 – nlp.cs.rpi.edu
… Methods based on information-theoretic human-robot dialog enable a robot to use ordinary language to explain what it needs to an untrained person. The human provides help that enables the robot to recover from its failure and continue operating autonomously. …
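The "information-theoretic human-robot dialog" mentioned in this snippet (in the spirit of the Tellex et al. work cited elsewhere in this list) has the robot ask the question whose answer is expected to reduce its uncertainty the most. A toy sketch of that selection criterion follows; the candidate objects, attributes, and deterministic answers are invented for illustration:

```python
import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy (bits) of a {outcome: prob} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_entropy_after(belief, attribute_of):
    """Expected posterior entropy over referents after asking one attribute.

    belief       -- {object: prob} over which object the user means
    attribute_of -- {object: answer} the user would give for that attribute
    """
    # Group candidate objects by the answer they would elicit.
    by_answer = defaultdict(dict)
    for obj, p in belief.items():
        by_answer[attribute_of[obj]][obj] = p
    expected = 0.0
    for group in by_answer.values():
        p_answer = sum(group.values())
        posterior = {o: p / p_answer for o, p in group.items()}
        expected += p_answer * entropy(posterior)
    return expected

belief = {"red_mug": 0.4, "blue_mug": 0.4, "red_bottle": 0.2}
color = {"red_mug": "red", "blue_mug": "blue", "red_bottle": "red"}
kind  = {"red_mug": "mug", "blue_mug": "mug", "red_bottle": "bottle"}

# Ask about whichever attribute leaves the least expected uncertainty.
questions = {"color?": color, "kind?": kind}
best = min(questions, key=lambda q: expected_entropy_after(belief, questions[q]))
print(best)
```

Minimizing expected posterior entropy is equivalent to maximizing expected information gain; real systems additionally weigh the social cost of asking.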

Physical Causality of Action Verbs in Grounded Language Understanding.
Q Gao, M Doering, S Yang, JY Chai – ACL (1), 2016 – aclweb.org
Page 1. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1814–1824, Berlin, Germany, August 7-12, 2016. ©2016 Association for Computational Linguistics Physical Causality …

Introspective classification for robot perception
H Grimmett, R Triebel, R Paul… – … International Journal of …, 2016 – journals.sagepub.com

Networked Robots
D Song, K Goldberg, NY Chong – Springer Handbook of Robotics, 2016 – Springer
… Graphics 31(5), 1–12 (2012)CrossRef. M. Johnson-Roberson, J. Bohg, G. Skantze, J. Gustafson, R. Carlson, B. Rasolzadeh, D. Kragic: Enhanced visual scene understanding through human-robot dialog, Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS) (2011) pp. …

Speech driven trunk motion generating system based on physical constraint
K Sakai, T Minato, CT Ishi… – Robot and Human …, 2016 – ieeexplore.ieee.org
Page 1. Speech Driven Trunk Motion Generating System based on Physical Constraint Kurima Sakai1,2, Takashi Minato2, Carlos T. Ishi2, and Hiroshi Ishiguro1,2 Abstract— We developed a method to automatically generate …

Detecting Target Objects by Natural Language Instructions Using an RGB-D Camera
J Bao, Y Jia, Y Cheng, H Tang, N Xi – Sensors, 2016 – mdpi.com
Controlling robots by natural language (NL) is increasingly attracting attention for its versatility, convenience, and lack of any need for extensive user training. Grounding is a crucial challenge of this problem: enabling robots to understand NL instructions from humans. This paper mainly …

Polite Interactions with Robots
L Benotti, P BLACKBURN – What Social Robots Can and Should …, 2016 – books.google.com
… The fast downward planning system. Journal of Artificial Intelligence Research, 26 (1): 191– 246, July 2006. ME Foster, M. Giuliani, A. Isard, C. Matheson, J. Oberlander, and A. Knoll. Evaluating description and reference strategies in a cooperative human-robot dialogue system. …

An Analysis of Using Semantic Parsing for Speech Recognition
R Corona – cs.utexas.edu
Page 1. An Analysis of Using Semantic Parsing for Speech Recognition Rodolfo Corona Abstract This thesis explores the use of semantic parsing for improving speech recognition performance. Specifically, it explores how a semantic parser may be used in order …

Towards self-confidence in autonomous systems
N Sweet, NR Ahmed, U Kuter, C Miller – AIAA Infotech@ Aerospace, 2016 – arc.aiaa.org
Page 1. Towards Self-Confidence in Autonomous Systems Nicholas Sweet, Nisar R. Ahmed, Ugur Kuter, and Christopher Miller I. Introduction Modern civilian and military systems have created a demand for sophisticated …

Generating machine-executable plans from end-user’s natural-language instructions
R Liu, X Zhang – arXiv preprint arXiv:1611.06468, 2016 – arxiv.org
Page 1. Generating machine-executable plans from end-user’s natural-language instructions Rui Liu, Xiaoli Zhang* Department of Mechanical Engineering, Colorado School of Mines, Golden, Colorado, 80401, USA {rliu, xlzhang …

A model for verifiable grounding and execution of complex natural language instructions
A Boteanu, T Howard, J Arkin… – Intelligent Robots and …, 2016 – ieeexplore.ieee.org
… Association for Computational Linguistics, vol. 1, pp. 49–62, 2013. [16] J. Thomason, S. Zhang, R. Mooney, and P. Stone, “Learning to interpret natural language commands through human-robot dialog,” 2015. [17] LP Kaelbling and T …

Understanding user instructions by utilizing open knowledge for service robots
D Lu, F Wu, X Chen – arXiv preprint arXiv:1606.02877, 2016 – arxiv.org
… III. SYSTEM OVERVIEW The overall architecture of our system is shown in Figure 2. As we can see, the human-robot dialog system transcribes spoken utterances into text sentences and manages the dialog with users. Each …

Evaluating SpeakyAcutattile: A System Based on Spoken Language for Ambient Assisted Living
F Fracasso, G Cortellessa, A Cesta… – Italian Forum of Ambient …, 2016 – Springer
… system. User Model User-Adap Inter 12(2–3):111–137. 24. Foster ME, Giuliani M, Knoll A (2009) Comparing objective and subjective measures of usability in a human-robot dialogue system. In …

Language based shared control of a mobile-manipulator robotic assistant for quadriplegics
C Kaur – 2016 – search.proquest.com
… is started. Next, the inherent flexibility of natural language gives rise to ambiguities. Researchers in [44–47] try to resolve this using human-robot dialog. In [44], an approach to learn the meaning of unseen words is proposed. If an …

Survey on 3D hand gesture recognition
H Cheng, L Yang, Z Liu – … on Circuits and Systems for Video …, 2016 – ieeexplore.ieee.org
Page 1. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 26, NO. 9, SEPTEMBER 2016 1659 Survey on 3D Hand Gesture Recognition Hong Cheng, Senior Member, IEEE, Lu Yang, Member, IEEE, and Zicheng Liu, Fellow, IEEE …

Aural servo: towards an alternative approach to sound localization for robot motion control
A Magassouba – 2016 – tel.archives-ouvertes.fr
Page 1. Aural servo: towards an alternative approach to sound localization for robot motion control. Aly Magassouba. Robotics [cs.RO]. …

Grounding robot motion in natural language and visual perception
SA Bronikowski – 2016 – search.proquest.com
… robot). 30. 2.21 Back to the Blocks World: Learning New Actions through Situated Human-Robot Dialogue [27]. She et al. explore teaching a physical manipulator arm to interact with objects through student-teacher dialogue. …

Use of character information by autonomous robots based on character string detection in daily environments
K Yamazaki, T Nishino, K Nagahama, K Okada… – Robotica, 2016 – cambridge.org
Page 1. Robotica (2016) volume 34, pp. 1113–1127. © Cambridge University Press 2014 doi:10.1017/S0263574714002094 Use of character information by autonomous robots based on character string detection in daily environments …

Združevanje večmodalne informacije in čezmodalno učenje v umetnih spoznavnih sistemih [Merging Multi-Modal Information and Cross-Modal Learning in Artificial Cognitive Systems]
A Vrečko – 2016 – eprints.fri.uni-lj.si
Page 1. University of Ljubljana Faculty of Computer and Information Science Alen Vrecko Merging Multi-Modal Information and Cross-Modal Learning in Artificial Cognitive Systems MASTER THESIS Supervisor: assoc. prof. dr. Danijel Skocaj Ljubljana, 2016 Page 2. Page 3. …

Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers
LR Elliott, SG Hill, M Barnes – 2016 – dtic.mil
… on social aspects of human-robot communications. Muto et al. (2009) explored options for human-robot dialogue for situations involving a social robot to assist elderly users. Findings from an investigation of human-human dialogue …

Probabilistic self-localisation on a qualitative map based on occlusions
PE Santos, MF Martins, V Fenelon… – … of Experimental & …, 2016 – Taylor & Francis
Page 1. JOURNAL OF EXPERIMENTAL & THEORETICAL ARTIFICIAL INTELLIGENCE, 2016 VOL. 28, NO. 5, 781–799 http://dx.doi.org/10.1080/0952813X.2015.1132265 Probabilistic self-localisation on a qualitative map based on occlusions …

A New Formal Approach to Semantic Parsing of Instructions and to File Manager Design
AA Razorenov, VA Fomichov – International Conference on Database and …, 2016 – Springer
… 992–1002, Beijing, China, 26–31 July 2015. https://aclweb.org/anthology/P/P15/P15-1096.pdf. Accessed 13 Mar 2016. 19. She, L., Yang, S., Cheng, Y., Jia, Y., Chai, J.Y., Xi, N.: Back to the blocks world: learning new actions through situated human-robot dialogue. …

Adapting child-robot interaction to reflect age and gender
A Sandygulova – 2016 – search.proquest.com
Adapting Child-Robot Interaction to Reflect Age and Gender. Abstract. Research and commercial robots have infiltrated homes, hospitals and schools, becoming attractive and proving impactful for children’s healthcare, therapy, edutainment, and other applications. …

The Media-Sphere as Dream
SB Schafer – Exploring the Collective Unconscious in the Age of …, 2016 – books.google.com
… and live-training environments we are developing to achieve squad overmatch and to optimize Soldier performance, both mentally and physically.” The Army and USC are, “exploring the potential of developing a flexible multi-modal human-robot dialogue that includes natural …

Multimodal Attention with Top-Down Guidance
B Schauerte – … Computational Attention for Scene Understanding and …, 2016 – Springer

Computer Vision and Natural Language Processing: Recent Approaches in Multimedia and Robotics
P Wiriyathammabhum, D Summers-Stay… – ACM Computing …, 2016 – dl.acm.org
Page 1. 71 Computer Vision and Natural Language Processing: Recent Approaches in Multimedia and Robotics PERATHAM WIRIYATHAMMABHUM, University of Maryland, College Park DOUGLAS SUMMERS-STAY, US …

Mining User Intents to Compose Services for End-Users
Y Zhao, S Wang, Y Zou, J Ng… – Web Services (ICWS) …, 2016 – ieeexplore.ieee.org
Page 1. Mining User Intents to Compose Services for End-Users Yu Zhao, Shaohua Wang, Ying Zou Queen’s University, Kingston, Ontario, Canada yu.zhao@queensu.ca, shaohua@cs.queensu.ca, ying.zou@queensu.ca Joanna …

Techno-Ethical Case-Studies in Robotics, Bionics, and Related AI Agent Technologies.
T Christaller, C Laschi, M Nagenborg, P Salvini – academia.edu

Prototype-based class-specific nonlinear subspace learning for large-scale face verification
A Iosifidis, M Gabbouj – … Tools and Applications (IPTA), 2016 6th …, 2016 – ieeexplore.ieee.org
Page 1. Prototype-based Class-Specific Nonlinear Subspace Learning for Large-Scale Face Verification Alexandros Iosifidis and Moncef Gabbouj Department of Signal Processing, Tampere University of Technology, Finland …

Sensory-motor and cognitive transformations
S Schneegans – Dynamic thinking: A primer on dynamic field …, 2016 – books.google.com
Page 188. 7 Sensory-Motor and Cognitive Transformation SEBASTIAN SCHNEEGANS This chapter will continue and complement the line of concepts introduced in the previous chapters of Part 2. In Chapter 5, we considered …

Extensible Command Parsing Method for Network Device
N Li, L Zhang, Q Zhang, F Li – Complex, Intelligent, and …, 2016 – ieeexplore.ieee.org
… Control Systems Technology, vol. 23, no. 3, pp. 924–936, 2015. [10] J. Thomason, S. Zhang, R. Mooney, and P. Stone, “Learning to interpret natural language commands through human-robot dialog,” in Proc. of IJCAI, 2015. …

Improving the performance against force variation of EMG controlled multifunctional upper-limb prostheses for transradial amputees
AH Al-Timemy, RN Khushaba… – … on Neural Systems …, 2016 – ieeexplore.ieee.org
Page 1. 650 IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, VOL. 24, NO. 6, JUNE 2016 Improving the Performance Against Force Variation of EMG Controlled Multifunctional Upper-Limb Prostheses for Transradial Amputees …