Human-Robot Dialog 2013


Notes:

ADE (Agent Development Environment) contains components of the DIARC (Distributed Integrated Affect, Reflection, and Cognition) architecture.

Resources:

See also:

Best Robot Speech Recognition Videos | Human-Robot Dialog 2011 | Human-Robot Dialog 2012


Toward information theoretic human-robot dialog S Tellex, P Thaker, R Deits, D Simeonov, T Kollar… – Robotics, 2013 – books.google.com Abstract—Our goal is to build robots that can robustly interact with humans using natural language. This problem is challenging because human language is filled with ambiguity, and furthermore, due to limitations in sensing, the robot’s perception of its environment … Cited by 14 Related articles All 9 versions

Clarifying commands with information-theoretic human-robot dialog R Deits, S Tellex, P Thaker… – Journal of Human …, 2013 – humanrobotinteraction.org Abstract Our goal is to improve the efficiency and effectiveness of natural language communication between humans and robots. Human language is frequently ambiguous, and a robot’s limited sensing makes complete understanding of a statement even more … Cited by 8 Related articles All 6 versions

Learning environmental knowledge from task-based human-robot dialog T Kollar, V Perera, D Nardi… – Robotics and Automation ( …, 2013 – ieeexplore.ieee.org Abstract—This paper presents an approach for learning environmental knowledge from task- based human-robot dialog. Previous approaches to dialog use domain knowledge to constrain the types of language people are likely to use. In contrast, by introducing a joint … Cited by 5 Related articles All 6 versions

Planning for Human–Robot Interaction in Socially Situated Tasks F Broz, I Nourbakhsh, R Simmons – International Journal of Social …, 2013 – Springer Int J Soc Robot (2013) 5:193–214 DOI 10.1007/s12369-013-0185-z Planning for Human–Robot Interaction in Socially Situated Tasks: The Impact of Representing Time and Intention Frank Broz · Illah Nourbakhsh · Reid Simmons … Cited by 28 Related articles All 4 versions

Exploring the effects of gaze and pauses in situated human-robot interaction G Skantze, A Hjalmarsson, C Oertel – … of the 14th Annual Meeting of …, 2013 – sigdial.org … dialogue. In that study, the human was the instruction-giver. In the current study, we use the same paradigm for a human-robot dialogue, but here the robot is the instruction-giver and the human is the instruction-follower. This … Cited by 7 Related articles All 8 versions

Knowledge acquisition through human–robot multimodal interaction G Randelli, TM Bonanni, L Iocchi, D Nardi – Intelligent Service Robotics, 2013 – Springer Intel Serv Robotics (2013) 6:19–31 DOI 10.1007/s11370-012-0123-1 SPECIAL ISSUE Knowledge acquisition through human–robot multimodal interaction Gabriele Randelli · Taigo Maria Bonanni · Luca Iocchi · Daniele Nardi … Cited by 8 Related articles All 6 versions

An extensible architecture for robust multimodal human-robot communication S Rossi, E Leone, M Fiore, A Finzi… – Intelligent Robots and …, 2013 – ieeexplore.ieee.org … To solve this problem, in future work, we propose to introduce another layer, called Dialogue Manager, which interacts with the fusion engine to integrate the information about the human-robot dialogue context in interpretation of the user commands. … Cited by 6 Related articles

Computational mechanisms for mental models in human-robot interaction M Scheutz – Virtual Augmented and Mixed Reality. Designing and …, 2013 – Springer … maintained in order to improve team performance. References: Briggs, G., Scheutz, M.: Multi-modal Belief Updates in Multi-Robot Human-Robot Dialogue Interactions. In: Proceedings of AISB 2012 (2012) … Cited by 5 Related articles All 2 versions

Engaging robots: easing complex human-robot teamwork using backchanneling MF Jung, JJ Lee, N DePalma… – Proceedings of the …, 2013 – dl.acm.org Abstract: People are increasingly working with robots in teams and recent research has focused on how human-robot teams function, but little attention has yet been paid to the role of social signaling behavior in human-robot teams. … Cited by 4 Related articles All 4 versions

Novel mechanisms for natural human-robot interactions in the diarc architecture M Scheutz, G Briggs, R Cantrell, E Krause… – Proceedings of AAAI …, 2013 – aaai.org … In Proceedings of the 2012 Conference on Social Robotics, LNCS, 238–247. Springer. Briggs, G., and Scheutz, M. 2012b. Multi-modal belief updates in multi-robot human-robot dialogue interactions. In Proceedings of AISB 2012. Briggs, G., and Scheutz, M. 2013. … Cited by 5 Related articles All 3 versions

Modeling dynamic spatial relations with global properties for natural language-based human-robot interaction J Fasola, MJ Mataric – RO-MAN, 2013 IEEE, 2013 – ieeexplore.ieee.org Abstract— We present a methodology for the representation of dynamic spatial relations (DSRs) with global properties as part of an approach for enabling robots to follow natural language commands from non-expert … Cited by 3 Related articles All 4 versions

A dialogue system for multimodal human-robot interaction L Lucignano, F Cutugno, S Rossi, A Finzi – Proceedings of the 15th ACM …, 2013 – dl.acm.org A Dialogue System for Multimodal Human-Robot Interaction. Lorenzo Lucignano, Francesco Cutugno, Silvia Rossi, Alberto Finzi – DIETI, Univ. di Napoli “Federico II”, Via Claudio 21, I-80125, Napoli, Italy {lor.lucignano,cutugno,silvia.rossi,alberto.finzi}@unina.it Abstract … Cited by 2 Related articles

The Furhat Back-Projected Humanoid Head – Lip Reading, Gaze And Multi-Party Interaction S Al Moubayed, G Skantze… – International Journal of …, 2013 – World Scientific The Furhat Back-Projected Humanoid Head – Lip Reading, Gaze and Multi-Party Interaction. Samer Al Moubayed, Gabriel Skantze and Jonas Beskow – Department of Speech, Music … Cited by 12 Related articles All 4 versions

How a robot should give advice C Torrey, SR Fussell, S Kiesler – Human-Robot Interaction (HRI …, 2013 – ieeexplore.ieee.org … Since then, research in human-robot dialogue for giving help has come a long way, although there continue to be important problems in speech misrecognition, … Robotics, Science, and Systems, Grounding Human-Robot Dialog for Spatial Tasks workshop. … Cited by 5 Related articles All 2 versions

Linguistic encoding of motion events in robotic system M Gnjatović, J Tasevski, D Mišković… – Proc. 6th PSU-UNS …, 2013 – psu-uns2013.com … of Technical Sciences Abstract: This paper reports and discusses an implementation of a cognitively-inspired and computationally appropriate linguistic encoding of motion events in human-robot dialogue. The proposed encoding … Cited by 4 Related articles All 2 versions

Expressing ethnicity through behaviors of a robot character M Makatchev, R Simmons, M Sakr… – Proceedings of the 8th …, 2013 – dl.acm.org … through verbal and nonverbal behaviors and of achieving the homophily effect. Keywords—human-robot dialogue; ethnicity; homophily. I. INTRODUCTION Individuals tend to associate disproportionally with others who are … Cited by 4 Related articles All 5 versions

Position-invariant, real-time gesture recognition based on dynamic time warping S Bodiroža, G Doisy, VV Hafner – Proceedings of the 8th ACM/IEEE …, 2013 – dl.acm.org … IEEE Conf. on Comput. Vision and Pattern Recognition, Colorado Springs, CO, USA, Jun. 2011, pp. 1297– 1304. [5] S. Bodiroza, HI Stern, and Y. Edan, “Dynamic gesture vocabulary design for intuitive human-robot dialog,” in Proc. Annu. ACM/IEEE Int. Conf. … Cited by 7 Related articles All 2 versions

An argumentation-based dialogue system for human-robot collaboration MQ Azhar, S Parsons, E Sklar – … of the 2013 international conference on …, 2013 – dl.acm.org … Inquiry and information-seeking dialogues could be employed to resolve robot errors due to miscommunication [3]. Current research on human-robot dialogue primarily addresses the “how to say it” and “when to say it” problems. … Cited by 1 Related articles All 7 versions

Knowledge representation for robots through human-robot interaction E Bastianelli, D Bloisi, R Capobianco… – arXiv preprint arXiv: …, 2013 – arxiv.org … level sensors. The user role throughout the acquisition process is to support the robot in place labeling. However, once achieved, the conceptual representation is also useful for effective human-robot dialogue. Pronobis and … Cited by 3 Related articles All 3 versions

Human-Robot Collaborative Assembly by On-line Human Action Recognition Based on an FSM Task Model H Goto, J Miura, J Sugiyama – Human-Robot Interaction 2013 …, 2013 – cs.cmu.edu … In Proceedings of AIAA 1st Intelligent Systems Technical Conf., 2004. [8] ME Foster and C. Matheson. Following Assembly Plans in Cooperative, Task-Based Human-Robot Dialogue. In Proceedings of the 12th Workshop on the Semantics and Pragmatics of Dialogue, 2008. … Cited by 4 Related articles All 3 versions

Automatic processing of irrelevant co-speech gestures with human but not robot actors CJ Hayes, CR Crowell, LD Riek – Proceedings of the 8th ACM/IEEE …, 2013 – dl.acm.org Automatic Processing of Irrelevant Co-Speech Gestures with Human but not Robot Actors. Cory J. Hayes, Charles R. Crowell, and Laurel D. Riek – Department of Computer Science and Engineering, Department … Cited by 3 Related articles All 3 versions

Incrementally biasing visual search using natural language input E Krause, R Cantrell, E Potapova, M Zillich… – Proceedings of the …, 2013 – dl.acm.org Incrementally Biasing Visual Search Using Natural Language Input. Evan Krause, Rehj Cantrell, Ekaterina Potapova, Michael Zillich, Matthias Scheutz – Tufts University, Medford, MA, USA {ekrause, mscheutz}@cs.tufts.edu … Cited by 4 Related articles All 5 versions

Linking cognitive tokens to biological signals: Dialogue context improves neural speech recognizer performance R Veale, G Briggs, M Scheutz – … of the 35th Annual Conference of the …, 2013 – hrilab.tufts.edu … Portland, Oregon. Briggs, G., & Scheutz, M. (2012). Multi-modal belief updates in multi-robot human-robot dialogue interaction. In Proceedings of 2012 symposium on linguistic and cognitive approaches to dialogue agents. Briggs, G., & Scheutz, M. (2013, forthcoming). … Cited by 2 Related articles All 3 versions

A Case for Argumentation to Enable Human-Robot Collaboration E Sklar, MQ Azhar, T Flyr, S Parsons – … . St Paul, MN, USA (May 2013 … – sci.brooklyn.cuny.edu … This type of dialogue, which promotes dynamic exchange of ideas, does not exist in today’s human-robot systems. Indeed the primary focus in human-robot dialogue is on the method of delivery, while the content is typically chosen from scripted sequences. … Cited by 1 Related articles All 3 versions

A Data-driven Model for Timing Feedback in a Map Task Dialogue System R Meena, G Skantze, J Gustafson – 14th Annual Meeting of the Special …, 2013 – aclweb.org Proceedings of the SIGDIAL 2013 Conference, pages 375–383, Metz, France, 22-24 August 2013. © 2013 Association for Computational Linguistics. A Data-driven Model for Timing Feedback in a Map Task Dialogue System … Cited by 3 Related articles All 7 versions

Generation Of Nodding, Head Tilting And Gazing For Human–Robot Speech Interaction C Liu, CT Ishi, H Ishiguro, N Hagita – International Journal of …, 2013 – World Scientific … We plan to evaluate the effect of face up motion on other types of robots in the future. 4. Evaluation of Head Motion and Eye Gazing During Human–Robot Dialogue Interaction The results from the experiment in Sec. 2 left questions unanswered. … Cited by 1 Related articles All 2 versions

Semantic management of human-robot interaction in ambient intelligence environments using N-ary ontologies N Ayari, A Chibani, Y Amirat – Robotics and Automation (ICRA), …, 2013 – ieeexplore.ieee.org … Representation Language (NKRL) framework with semantic modules to allow on one hand, converting robot interactions into formal n-ary semantic annotations, and on the other hand, making semantic inferences for: (i) driving the human-robot dialogue, (ii) inferring the spatio … Cited by 1 Related articles All 3 versions

Knowledgeable talking robots LC Aiello, E Bastianelli, L Iocchi, D Nardi… – Artificial General …, 2013 – Springer … 1, pp. 86–90. Association for Computational Linguistics, Stroudsburg (1998) 4. Bannat, A., Blume, J., Geiger, JT, Rehrl, T., Wallhoff, F., Mayer, C., Radig, B., Sosnowski, S., Kühnlenz, K.: A multimodal human-robot-dialog applying emotional feedbacks. … Cited by 2 Related articles All 4 versions

Towards Situated Dialogue: Revisiting Referring Expression Generation. R Fang, C Liu, L She, JY Chai – EMNLP, 2013 – cs.msu.edu … dialogue. 1 Introduction Situated human robot dialogue has received increasing attention in recent years. In situated dialogue, robots/artificial agents and their human partners are co-present in a shared physical world. Robots … Cited by 2 Related articles All 5 versions

Single assembly robot in search of human partner: Versatile grounded language generation RA Knepper, S Tellex, A Li, N Roy, D Rus – Proceedings of the 8th ACM/ …, 2013 – dl.acm.org … In Proc. AAAI, 2011. [5] S. Tellex, P. Thaker, R. Deits, T. Kollar, and N. Roy. Toward information theoretic human-robot dialog. In Proceedings of Robotics: Science and Systems, Sydney, Australia, July 2012. 978-1-4673-3101-2/13/$31.00 © 2013 IEEE … Cited by 2 Related articles All 4 versions

Modeling the human blink: A computational model for use within human–robot interaction CC Ford, G Bugmann, P Culverhouse – International Journal of …, 2013 – World Scientific Modeling the Human Blink: A Computational Model for Use Within Human–Robot Interaction. CC Ford – Center for Robotics and Neural Systems, University of Plymouth, Room B106, Portland Square … Cited by 1 Related articles All 4 versions

An Attention-Directed Robot for Social Telepresence R Yan, KP Tee, Y Chua, Z Huang, H Li – 2013 – oar.a-star.edu.sg … This can be achieved by audiovisual integration based on scene understanding and position information, as shown in several works that aim to make human-robot dialog more natural and flexible [2, 3, 4]. By adding short term memory, people can be tracked even if they … Cited by 2 Related articles All 3 versions

A short review of symbol grounding in robotic and intelligent systems S Coradeschi, A Loutfi, B Wrede – KI-Künstliche Intelligenz, 2013 – Springer Künstl Intell (2013) 27:129–136 DOI 10.1007/s13218-013-0247-2 TECHNICAL CONTRIBUTION A Short Review of Symbol Grounding in Robotic and Intelligent Systems Silvia Coradeschi · Amy Loutfi · Britta Wrede Received … Cited by 5 Related articles All 6 versions

On the Many Interacting Flavors of Planning for Robotics K Talamadupula, M Scheutz, G Briggs, S Kambhampati – 2013 – tahoma.eas.asu.edu … [Briggs and Scheutz 2012] Briggs, G., and Scheutz, M. 2012. Multi-modal belief updates in multi-robot human-robot dialogue interaction. In Proceedings of 2012 Symposium on Linguistic and Cognitive Approaches to Dialogue Agents. … Cited by 1 Related articles All 8 versions

“You two! Take off!”: Creating, modifying and commanding groups of robots using face engagement and indirect speech in voice commands S Pourmehr, VM Monajjemi… – Intelligent Robots and …, 2013 – ieeexplore.ieee.org … Kluwer, 2002, pp. 16–20. [7] G. Briggs and M. Scheutz, “Multi-modal belief updates in multi-robot human-robot dialogue interaction,” in Proc. of 2012 Symposium on Linguistic and Cognitive Approaches to Dialogue Agents, 2012, pp. 67–72. … Cited by 7 Related articles All 7 versions

Towards cooperative bayesian human-robot perception: Theory, experiments, opportunities N Ahmed, E Sample, TL Yang, D Lee… – Workshops at the …, 2013 – aaai.org … of AIAA GNC 2011, Portland, OR. Tellex, S., Thaker, P., Deits, R., Kollar, T., & Roy, N. 2012. Toward information theoretic human-robot dialog. Proceedings of Robotics: Science and Systems, Sydney, Australia. Kaupp, T., & Makarenko, A. 2008. … Cited by 1 Related articles All 2 versions

Picking favorites: The influence of robot eye-gaze on interactions with multiple users DE Karreman, GUS Bradford… – … Robots and Systems …, 2013 – ieeexplore.ieee.org … Mutlu et al. [13] stated gaze direction is frequently non-verbal leakage of a person’s focus or interest and they showed that robot gaze cues for humanoid and non-humanoid robots are effective in a human-robot dialogue. They … Cited by 2 Related articles All 5 versions

Human Evaluation of Conceptual Route Graphs for Interpreting Spoken Route Descriptions R Meena, G Skantze, J Gustafson – Proceedings of the 3rd …, 2013 – ling.uni-potsdam.de … 19-27). Los Angeles, CA. 3. Meena, R., Skantze, G., & Gustafson, J. (2012). A Data-driven Approach to Understanding Spoken Route Directions in Human-Robot Dialogue. Interspeech. Portland, OR. 4. Bugmann, G., Klein, E., Lauria, S., & Kyriacou, T. (2004). … Cited by 1 Related articles All 7 versions

Multi-Modal Conversational Search and Browse. LP Heck, D Hakkani-Tür, M Chinthakunta… – SLAM@ …, 2013 – msr-waypoint.com … Computer Graphics, vol. 14, no. 3, pp. 262, 1980. [3] G. Taylor, R. Frederiksen, J. Crossman, J. Voigt, and K. Aron, “A smart interaction device for multi-modal human-robot dialogue,” Ann Arbor, pp. 190–191, 2012. [4] R. Balchandran … Cited by 4 Related articles All 6 versions

Developing a tactical language for future robotic teammates E Phillips, J Rivera, F Jentsch – Proceedings of the Human Factors …, 2013 – pro.sagepub.com … 333-380). San Francisco: Jossey-Bass. Deits, R., Tellex, S., Thaker, P., Simeonov, D., Kollar, T., & Roy, N. (2013). Clarifying commands with information-theoretic human-robot dialog. Journal of Human-Robot Interaction, 2(2), 58-79. … Cited by 1 Related articles

Knowing when we don’t know: Introspective classification for mission-critical decision making H Grimmett, R Paul, R Triebel… – Robotics and Automation …, 2013 – ieeexplore.ieee.org … as the detection of ground traversability (eg [9]), the detection of lanes for autonomous driving (eg [10]), the consideration of classifier output to guide trajectory planning and exploration (see, for example, [11], [12]) or the active disambiguation of human-robot dialogue [13]. … Cited by 6 Related articles

Shared Gaze in Situated Referential Grounding: An Empirical Study C Liu, R Fang, JY Chai – Eye Gaze in Intelligent User Interfaces, 2013 – Springer … As a new generation of robots start to emerge into our daily life, techniques that enable situated human robot dialogue have become increasingly important (Bohus and Horvitz 2009 ). Human robot dialogue often involves objects and their identities in the environment. … Cited by 1 Related articles All 3 versions

On-line semantic mapping E Bastianelli, DD Bloisi, R Capobianco… – … (ICAR), 2013 16th …, 2013 – ieeexplore.ieee.org … Not only the user supports the robot in place labeling, but the representation is also used in human-robot dialogue. … Some of these states are used for contextual execution of some behaviors and for the human-robot dialogue. … Cited by 5 Related articles All 2 versions

Do beliefs about a robot’s capabilities influence alignment to its actions? AL Vollmer, B Wrede, KJ Rohlfing… – … and Learning and …, 2013 – ieeexplore.ieee.org … understanding (in human-robot interaction and adult-child interaction [19], [20], [21]). Human alignment in human-robot dialog clearly appears to be beneficial. Consider an example of alignment of choice of words in a conversation … Cited by 1 Related articles All 4 versions

Explicit knowledge and the deliberative layer: Lessons learned S Lemaignan, R Alami – … and Systems (IROS), 2013 IEEE/RSJ …, 2013 – ieeexplore.ieee.org … Two important remarks: because HATP is a generic symbolic task planner, we have been able to design a planning domain at a semantic level which is close to the one used in the human-robot dialogue (the planner vocabulary contains concepts like give, table, is on…). … Cited by 1 Related articles All 2 versions

Kernel-based discriminative re-ranking for spoken command understanding in hri R Basili, E Bastianelli, G Castellucci, D Nardi… – AI*IA 2013: Advances in …, 2013 – Springer Kernel-Based Discriminative Re-ranking for Spoken Command Understanding in HRI. Roberto Basili, Emanuele Bastianelli, Giuseppe Castellucci, Daniele Nardi, and Vittorio Perera – Dept. of Enterprise Engineering … Cited by 2 Related articles All 2 versions

Increasing Helpfulness towards a Robot by Emotional Adaption to the User B Kühnlenz, S Sosnowski, M Buß, D Wollherr… – International Journal of …, 2013 – Springer … 3.1 Explicit Emotional Adaption: Independent of the interactive goal which is expressed later during task-related human-robot dialog, the idea is to implement some small talk to open the dialog and thereby monitor the current mood or other personal attitudes of the … Cited by 1 Related articles All 2 versions

Introspective Active Learning for Scalable Semantic Mapping R Triebel, H Grimmett, R Paul… – Workshop on Active …, 2013 – vision.cs.tum.edu … Recent work by Tellex et al. [16] explores active information gathering for human-robot dialog. … [16] Stefanie Tellex, Pratiksha Thaker, Robin Deits, Thomas Kollar, and Nicholas Roy. Toward information theoretic human-robot dialog. In Robotics: Science and Systems, 2012. … Cited by 2 Related articles All 5 versions

Model of human clothes based on saliency maps S Hommel, D Malysiak… – … and Informatics (CINTI), …, 2013 – ieeexplore.ieee.org … In this way, the more complex saliency maps based features are only calculated for the searched person and a few hypotheses. A. General Feature The used general features are basically described in [1] for a human robot dialog system. … Cited by 1 Related articles All 5 versions

I Would Like Some Food: Anchoring Objects to Semantic Web Information in Human-Robot Dialogue Interactions A Persson, S Coradeschi, B Rajasekaran, V Krishna… – Social Robotics, 2013 – Springer Abstract Ubiquitous robotic systems present a number of interesting application areas for socially assistive robots that aim to improve quality of life. In particular the combination of smart home environments and relatively inexpensive robots can be a viable technological … Cited by 1 Related articles All 5 versions

Towards evaluating recovery strategies for situated grounding problems in human-robot dialogue M Marge, AI Rudnicky – RO-MAN, 2013 IEEE, 2013 – ieeexplore.ieee.org Abstract—Robots can use information from their surroundings to improve spoken language communication with people. Even when speech recognition is correct, robots face challenges when interpreting human instructions. These situated grounding problems … Related articles All 2 versions

Techniques for Real-time Multi-person Face Tracking for Human-robot Dialogue Z Katibeh – 2013 – digitalamedier.bth.se Abstract: The aim of this work is an investigation of interaction between a robot and multiple humans. The robot head is “Furhat”, developed in the Speech, Music and Hearing (TMH) department at KTH, and the main focus is on real-time tracking algorithms. The study contains two parts: … Related articles All 4 versions

Task-based evaluation of context-sensitive referring expressions in human–robot dialogue ME Foster, M Giuliani, A Isard – Language and Cognitive Processes, 2013 – Taylor & Francis The standard referring-expression generation task involves creating stand-alone descriptions intended solely to distinguish a target object from its context. However, when an artificial system refers to objects in the course of interactive, embodied dialogue with a … Related articles

Towards Metareasoning for Human-Robot Interaction X Chen, Z Sui, J Ji – Intelligent Autonomous Systems 12, 2013 – Springer … in many aspects and thus they should help each other in order to fulfill better services for humans [14,6,8]. One means to this end, which has drawn increasing interest recently, is to make robots capable of asking humans for help through human-robot dialogue [5,13,11]. … Related articles All 5 versions

Development of a taxonomy to improve human-robot-interaction through multimodal robot feedback N Mirnig – CHI’13 Extended Abstracts on Human Factors in …, 2013 – dl.acm.org … In Proc. CHI 2009, ACM (2009), 3769- 3774. [11] Liu, C., Ishi, C. Ishiguro, H., and Hagita, N. Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction. In Proc. HRI 2012, IEEE (2012), 285-292. … Related articles

Enabling human-robot collaboration via argumentation E Sklar, MQ Azhar, T Flyr, S Parsons – Proceedings of the 2013 …, 2013 – dl.acm.org … The robot is only allowed to utter locutions that make use of information from ΣR [6]. A dialogue may affect the robot’s beliefs (b ∈ ΣR ∪ ΣR(H)) or actions (a ∈ Actions). Figures 1 and 2 show our human-robot dialogue protocols. … Related articles All 4 versions

“Talking to my robot”: From knowledge grounding to dialogue processing S Lemaignan, R Alami – … Robot Interaction (HRI), 2013 8th ACM …, 2013 – ieeexplore.ieee.org … C. Other components: While focusing on symbolic knowledge and human-robot dialogue, the video demonstration is the result of the integration of many different components. Besides off-the-shelf PR2 components (like laser-based localisation or 2D navigation), … Related articles All 4 versions

Architectural Mechanisms for Handling Human Instructions in Open-World Mixed-Initiative Team Tasks K Talamadupula, G Briggs, M Scheutz – public.asu.edu … Specifically, our robot was put in a scenario where it had to listen for and understand natural language instructions from a human teammate. In this case, the human-robot dialog was as follows: “H: Cindy, Commander Z really needs a medical kit. … Related articles All 3 versions

Role-based coordinating communication for effective human-robot task collaborations AS Clair, M Mataric – Collaboration Technologies and Systems …, 2013 – ieeexplore.ieee.org … These types of models have been widely used in robotics for planning, multi-agent coordination, and in human-robot dialog systems [5], and are also frequently used in robot learning from demonstration [6], where the model is learned from some outside source such as a human … Related articles All 4 versions

Toward a Tutorial Dialogue System for Urban Navigation C Fox, KE Boyer – illc.uva.nl … 2011. Understanding Route Directions in Human-Robot Dialogue. In SemDial, number September, pages 19–27. … 2012. A Data-driven Approach to Understanding Spoken Route Directions in Human-Robot Dialogue. In IN- TERSPEECH. … Related articles

Audio-visual attention control of a pan-tilt telepresence robot KP Tee, R Yan, Y Chua, Z Huang – Control, Automation and …, 2013 – ieeexplore.ieee.org … Audio proto objects for improved sound localization. In Proc. of IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2009. [8] R. Yan, T. Rodemann, and B. Wrede. Simple auditory and visual features for human-robot dialog scene analysis. In Proc. … Related articles All 2 versions

Applicability of Equilibrium Theory of Intimacy to Non-Verbal Interaction with Robots: Multi-Channel Approach Using Duration of Gazing and Distance Between a … H Kamide, K Kawabe, S Shigemi, T Arai – Journal ref: Journal of Robotics …, 2013 – fujipress.jp … of the IEEE Int. Symposium on Robot and Human Interactive Communication, pp. 1022-1028, 2009. [15] C. Liu, CT Ishi, H. Ishiguro, and N. Hagita, “Generation of Nodding, Head Tilting and Eye Gazing for Human-Robot Dialogue Interaction,” Proc. of ACM/IEEE Int. Conf. … Related articles

Human–Robot Interaction A Kirsch – Computation for Humanity: Information Technology to …, 2013 – books.google.com 8 Human–Robot Interaction, Alexandra Kirsch. Contents: 8.1 Facets of Human–Robot Interaction; 8.1.1 Closeness of Interaction; 8.1.2 Purpose … Related articles

Multimodal Fusion in Human-Agent Dialogue E André, JC Martin, F Lingenfelser… – Coverbal Synchrony in …, 2013 – books.google.com … processing and multimodal generation is required. Stiefelhagen et al. (2007) propose to allow for clarification dialogues in order to improve the accuracy of the fusion process in human-robot dialogue. Visser et al. (2012) describe an … Related articles All 3 versions

Interruptible Autonomy: Towards Dialog-Based Robot Task Management Y Sun, B Coltin, M Veloso – Workshops at the Twenty-Seventh AAAI …, 2013 – aaai.org … In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5990–5995. Kollar, T.; Perera, V.; and Veloso, M. 2013. Learning environmental knowledge from task-based human-robot dialog. In International Conference on Robotics and Automation. … Related articles All 10 versions

KeJia: The Integrated Intelligent Robot for RoboCup@Home 2013 X Chen, F Wang, H Sun, J Xie, M Cheng, K Chen – staff.science.uva.nl … Finally, it needs the capability of learning from its experience and humans and thus reach a higher performance; specifically, we hope the robot can acquire general knowledge through the human robot dialogue and other sources such as open knowledge bases. … Related articles All 4 versions

A Software Framework for Multi-Robot Human Interaction J Sattar, M Grimson, J Little – icra2013mrs.tuebingen.mpg.de … dictated from a human user. Our broader goal is to reduce uncertainty in human-robot dialog by providing a robust assessment of risk by evaluating the command through the relevant network components. To that goal, we rely … Related articles All 2 versions

A Multi-view camera-projector system for object detection and robot-human feedback J Shen, J Jin, N Gans – Robotics and Automation (ICRA), 2013 …, 2013 – ieeexplore.ieee.org A Multi-View Camera-Projector System for Object Detection and Robot-Human Feedback Jinglin Shen, Jingfu Jin and Nicholas Gans Abstract— In this paper, we present a novel camera-projector system for assisting robot-human interaction. … Related articles All 4 versions

Simulation Competitions on Domestic Robots J Ji, Z Sui, G Jin, J Xie, X Chen – RoboCup 2012: Robot Soccer World Cup …, 2013 – Springer … an object is portable, etc. Human-robot dialogue is simulated in a simplified way, by sending to each competing program a list of testing problems expressed in some verbal languages. The competing programs are required … Related articles All 5 versions

Pursuing and Demonstrating Understanding in Dialogue D DeVault, M Stone – cs.rutgers.edu Pursuing and Demonstrating Understanding in Dialogue. David DeVault and Matthew Stone – University of Southern California and Rutgers University. 1.1 Introduction The appeal of dialogue as an interface modality … Related articles All 3 versions

Screen feedback: How to overcome the expressive limitations of a social robot N Mirnig, YK Tan, BS Han, H Li… – RO-MAN, 2013 …, 2013 – ieeexplore.ieee.org … IEEE, 2005, pp. 708–713. [8] C. Liu, C. Ishi, H. Ishiguro, and N. Hagita, “Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction,” in Proc. of 7th International Conference on Human-Robot Interaction. IEEE, 2012, pp. 285–292. … Related articles

Crowdsourcing for Spoken Dialog System Evaluation GAL Zhaojun Yang, H Meng – Crowdsourcing for Speech …, 2013 – books.google.com … crowdsourced reply is used as the system response. DePalma et al. (2011) provide an example of the use of crowdsourcing to develop models of human-robot dialog. They describe the use of crowdsourced interactions … Related articles All 3 versions

A dialogue management system using a corpus-based framework and a dynamic dialogue transition model S Kang, Y Ko, J Seo – AI Communications, 2013 – IOS Press AI Communications 26 (2013) 145–159, DOI 10.3233/AIC-130552, IOS Press. A dialogue management system using a corpus-based framework and a dynamic dialogue transition model. Sangwoo Kang, Youngjoong … Related articles All 4 versions

A Multimodal Emotion Detection System during Human–Robot Interaction F Alonso-Martín, M Malfaz, J Sequeira, JF Gorostiza… – Sensors, 2013 – mdpi.com … communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human–robot dialog. Each … Cited by 1 Related articles All 7 versions

Attention and Emotion Based Adaption of Dialog Systems S Hommel, A Rabie, U Handmann – Intelligent Systems: Models and …, 2013 – Springer … Due to the growing prevalence of service robots, more and more inexperienced and uninstructed users are coming into contact with them. A great deal of effort has therefore been spent in recent years on enabling natural human-robot dialog in service robotics. … Related articles All 5 versions

An intelligent service system with multiple robots Q Lu, G Lu, A Bai, D Zhang, X Chen – staff.ustc.edu.cn … For more information, the reader is referred to the team description paper of WrightEagle@Home for the competition RoboCup@Home 2013. Dialogue Understanding The Human-Robot Dialogue module provides the interface for communication between users and the robot. … Related articles All 3 versions

Using Embodied Multimodal Fusion to Perform Supportive and Instructive Robot Roles in Human-Robot Interaction M Giuliani, A Knoll – International Journal of Social Robotics, 2013 – Springer Int J Soc Robot (2013) 5:345–356 DOI 10.1007/s12369-013-0194-y Using Embodied Multimodal Fusion to Perform Supportive and Instructive Robot Roles in Human-Robot Interaction Manuel Giuliani · Alois Knoll Accepted … Cited by 1 Related articles All 5 versions

LaSTIC Laboratory, Computer Science Department, University of Batna, 05000 Algeria T Toumi, A Zidani – … and Collective Behaviors in Robotics (ICBR …, 2013 – ieeexplore.ieee.org … 1554-1559. [3] M. Rickert, ME Foster, M. Giuliani, T. By, G. Panin, and A. Knoll, “Integrating language, vision and action for human robot dialog systems,” in Universal Access in Human-Computer Interaction. Ambient Interaction, ed: Springer, 2007, pp. 987-995. … Related articles

POMDP-Based Interaction and Interactive Natural Language Grounding with a NAO Robot MB Forbes – 2013 – homes.cs.washington.edu POMDP-Based Interaction and Interactive Natural Language Grounding with a NAO Robot by Maxwell B. Forbes Submitted to the Department of Computer Science and Engineering in partial fulfillment of the requirements for the degree of … Related articles All 2 versions

Symbol Grounding as Social, Situated Construction of Meaning in Human-Robot Interaction GJM Kruijff – KI-Künstliche Intelligenz, 2013 – Springer … PhD thesis, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic, April 2001 41. Kruijff GJM, Brenner M (2007) Modelling spatio-temporal comprehension in situated human-robot dialogue as reasoning about intentions and plans. … Related articles All 3 versions

A Practical Approach to Human/Multi-Robot Teams E Sklar, S Parsons, AT Ozgelen, MQ Azhar, T Flyr… – irit.fr … robot interaction (HRI) systems. The current focus in most human-robot dialogue work is on natural language architectures [25] or delivery methods [18,22,31,43], rather than dynamic content selection. For human-robot systems … Related articles

A precursory look at potential interaction objectives affecting flexible robotic cell safety A Savoy, A McLeod – Human Interface and the Management of Information. …, 2013 – Springer … Research in HRI risk assessment has identified many factors that could be hazardous to humans within a robot’s workspace [7,3,8]. The derivation of these factors stems from four general areas, namely: robot self-awareness, robot self-reliance, human-robot dialog, and robot … Related articles All 2 versions

Rhetorical robots: making robots more effective speakers using linguistic cues of expertise S Andrist, E Spannan, B Mutlu – Proceedings of the 8th ACM/IEEE …, 2013 – dl.acm.org … improvements. These results have implications for the development of effective dialogue strategies for informational robots. Index Terms—Human-robot dialogue; robot speech; linguistic cues of expertise; rhetorical ability; persuasion. I … Cited by 1 Related articles All 4 versions

Knowledge-Based Reasoning on Semantic Maps R Capobianco, G Gemignani, D Nardi, D Bloisi… – 2013 – redwood.cs.ttu.edu … The user role, during the acquisition process, is to support the robot in the activity of place labeling, while the obtained representation is also used for human-robot dialogue. A more general approach to human-robot collaboration for semantic mapping is taken by (Kruijff et al. … Related articles All 3 versions

Learning Through Multi-Modal HRI PhD Program Work Plan G Gemignani – 2013 – dis.uniroma1.it … low-level sensors. The user role throughout the acquisition process is exploited to support in-place robot labelling; the obtained conceptual representation is later used for effective human-robot dialog. [12] present instead … Related articles

Continuous multi-modal human interest detection for a domestic companion humanoid robot J Chen, WJ Fitzgerald – Advanced Robotics (ICAR), 2013 16th …, 2013 – ieeexplore.ieee.org … IEEE, 2004, vol. 3, pp. 2422–2427. [2] M. Rickert, M. Foster, M. Giuliani, T. By, G. Panin, and A. Knoll, “Integrating language, vision and action for human robot dialog systems,” Universal Access in Human-Computer Interaction. Ambient Interaction, pp. 987–995, 2007. … Related articles

[BOOK] Robotics N Roy, P Newman, S Srinivasa – 2013 – books.google.com … Anthony Cowley….. 401 Toward Information Theoretic Human-Robot Dialog Stefanie Tellex, Pratiksha Thaker, Robin Deits, Dimitar Simeonov, Thomas Kollar, and Nicholas Roy….. 409 Efficiently … All 2 versions

An OpenCCG-Based Approach to Question Generation from Concepts MM Berg, A Isard, JD Moore – Natural Language Processing and …, 2013 – Springer … This is in line with the results of Dautenhahn et al. [7], who also found that 71% of people wish for a human-like communication with robots. Looi and See [14] describe the stereotype of human-robot dialogue as being monotonous and inhumane. … Related articles All 4 versions

Embodying Care in Matilda: An Affective Communication Robot for Emotional Wellbeing of Older People in Australian Residential Care Facilities R Khosla, MT Chu – ACM Transactions on Management Information …, 2013 – dl.acm.org Embodying Care in Matilda: An Affective Communication Robot for Emotional Wellbeing of Older People in Australian Residential Care Facilities RAJIV KHOSLA and MEI-TAI CHU, La Trobe University Ageing … Related articles

Active-Speaker Detection and Localization with Microphones and Cameras Embedded into a Robotic Head J Cech, R Mittal, A Deleforge, J Sanchez-Riera… – … on Humanoid Robots, 2013 – hal.inria.fr Active-Speaker Detection and Localization with Microphones and Cameras Embedded into a Robotic Head Jan Cech*, Ravi Mittal, Antoine Deleforge, Jordi Sanchez-Riera, Xavier Alameda-Pineda and Radu Horaud … Related articles All 9 versions

Development of minimalist bipedal walking robot with flexible ankle and split-mass balancing systems HS Jo, N Mir-Nasiri – International Journal of Automation and Computing, 2013 – Springer International Journal of Automation and Computing 10(5), October 2013, 425-437 DOI: 10.1007/s11633-013-0739-4 Development of Minimalist Bipedal Walking Robot with Flexible Ankle and Split-mass Balancing Systems Hudyjaya Siswoyo Jo, Nazim Mir-Nasiri … Related articles All 8 versions

Driven Learning for Driving: How Introspection Improves Semantic Mapping R Triebel, H Grimmett, R Paul, I Posner – robots.ox.ac.uk … linked samples leading to an improved workspace representation. Recent work by Tellex et al. [17] explores active information gathering for human-robot dialog. The authors formulate an information-theoretic strategy for asking … Related articles
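The information-theoretic question-selection idea that several of these entries attribute to Tellex et al. can be sketched in a few lines: the robot keeps a belief distribution over candidate interpretations of a command and asks the clarifying question whose expected answer most reduces the entropy of that belief. This is an editor's illustrative sketch, not the authors' implementation; the function names, the toy belief, and the answer model below are all invented for the example.

```python
import math

def entropy(p):
    # Shannon entropy (bits) of a distribution given as {hypothesis: prob}
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def expected_entropy_after(belief, answer_model):
    # answer_model[answer][hypothesis] = P(answer | hypothesis)
    exp_h = 0.0
    for answer, likelihood in answer_model.items():
        # marginal probability of this answer: P(a) = sum_h P(a|h) P(h)
        p_ans = sum(likelihood[h] * belief[h] for h in belief)
        if p_ans == 0:
            continue
        # Bayesian posterior over hypotheses given the answer
        posterior = {h: likelihood[h] * belief[h] / p_ans for h in belief}
        exp_h += p_ans * entropy(posterior)
    return exp_h

def best_question(belief, questions):
    # choose the question with the lowest expected post-answer entropy,
    # i.e. the largest expected information gain
    return min(questions, key=lambda q: expected_entropy_after(belief, questions[q]))

# toy scenario: two candidate referents, two candidate questions
belief = {"red_box": 0.5, "blue_box": 0.5}
questions = {
    "Is it red?": {
        "yes": {"red_box": 0.9, "blue_box": 0.1},
        "no":  {"red_box": 0.1, "blue_box": 0.9},
    },
    "Is it heavy?": {  # uninformative: answers don't depend on the referent
        "yes": {"red_box": 0.5, "blue_box": 0.5},
        "no":  {"red_box": 0.5, "blue_box": 0.5},
    },
}
print(best_question(belief, questions))  # → Is it red?
```

The design point the papers exploit is that this criterion lets the robot trade off asking for help against acting under uncertainty, and it naturally ignores questions whose answers carry no information about the current ambiguity.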

Ya-Kun Zhu, Xin-Ping Guan, Xiao-Yuan Luo – International Journal, 2013 – ijac.net … [24] CR Liu, CT Ishi, H. Ishiguro, N. Hagita. Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction. In Proceedings of the 7th Annual ACM/IEEE International Conference on Human-Robot Interaction, ACM, New York, USA, pp.285-292, 2012. … All 2 versions

Route description interpretation on automatically labeled robot maps C Landsiedel, R de Nijs, K Kuhnlenz… – … (ICRA), 2013 IEEE …, 2013 – ieeexplore.ieee.org … 30, no. 6, pp. 755–771, 2011. [5] M. Johnson-Roberson, J. Bohg, G. Skantze, J. Gustafson, R. Carlson, B. Rasolzadeh, and D. Kragic, “Enhanced visual scene understanding through human-robot dialog,” in Proc. of the IEEE Int. Conf. … Related articles All 2 versions

AI’s 10 to Watch D Zeng – Intelligent Systems, IEEE, 2013 – ieeexplore.ieee.org … Methods based on information-theoretic human-robot dialog enable a robot to use ordinary language to explain what it needs to an untrained person. The human provides help that enables the robot to recover from its failure and continue operating autonomously. … Cited by 1 All 6 versions

Translating Action Knowledge into High-Level Semantic Representations for Cognitive Robots J Xie, X Chen, Z Sui – Nonmonotonic Reasoning, Action and Change – ustc.edu.cn … [Tenorth et al., 2010] propose extracting the action knowledge from the natural language instructions from the World Wide Web. [Chen et al., 2010] demonstrate the high-level cognitive functions for a robot to acquire knowledge from human-robot dialog. … Related articles All 5 versions

Generalized Grounding Graphs: A Probabilistic Framework for Understanding Grounded Language T Kollar, S Tellex, MR Walter, A Huang… – Journal of Artificial …, 2013 – people.csail.mit.edu Journal of Artificial Intelligence Research (2013) Submitted 5/13; published Generalized Grounding Graphs: A Probabilistic Framework for Understanding Grounded Language Thomas Kollar, Stefanie Tellex, Matthew … Cited by 1 Related articles All 3 versions

Touch and Speech: Multimodal Interaction for Elderly Persons C Jian, H Shi, F Schafmeister, C Rachuy… – Biomedical Engineering …, 2013 – Springer J. Gabriel et al. (Eds.): BIOSTEC 2012, CCIS 357, pp. 385–400, 2013. © Springer-Verlag Berlin Heidelberg 2013 Touch and Speech: Multimodal Interaction for Elderly Persons Cui Jian, Hui Shi, Frank Schafmeister … Related articles

Learning Actions and Verbs from Situated Interactive Instruction for Embodied Cognitive Agents S Mohan – Learning, 2013 – shiwali.me … 3.1 Embodied Language Comprehension … 3.2 Human Robot Dialog … The virtual nature of the environment simplifies the challenge of referential comprehension (R1). 3.2 Human Robot Dialog … Related articles

Computational Audiovisual Scene Analysis in Online Adaptation of Audio-Motor Maps R Yan, T Rodemann, B Wrede – 2013 – ieeexplore.ieee.org … Related articles All 2 versions

Final Pedestrian Behaviour Component A Albore, J Boye, M Fredriksson, J Gotze, J Gustafson… – 2013 – spacebook-project.eu D2.3.2: Final Pedestrian Behaviour Component Alexandre Albore, Johan Boye, Morgan Fredriksson, Jana Götze, Joakim Gustafson, Jürgen Königsmann Distribution: Public SpaceBook Spatial & Personal Adaptive … Related articles

Learning semantic maps from natural language descriptions MR Walter, S Hemachandra, B Homberg, S Tellex… – 2013 – dspace.mit.edu Learning Semantic Maps from Natural Language Descriptions Walter, Matthew R., Sachithra Hemachandra, Bianca Homberg, Stefanie Tellex, and Seth Teller. “Learning Semantic Maps from Natural Language … Cited by 8 Related articles All 7 versions

Designing speech-based interfaces for telepresence robots for people with disabilities KM Tsui, K Flynn, A McHugh… – … (ICORR), 2013 IEEE …, 2013 – ieeexplore.ieee.org Designing Speech-Based Interfaces for Telepresence Robots for People with Disabilities Katherine M. Tsui, Kelsey Flynn, Amelia McHugh, and Holly A. Yanco University of Massachusetts Lowell Lowell, MA 01854 Email: {ktsui, kflynn, amchugh, holly}@cs.uml.edu … Cited by 3 Related articles All 4 versions

Annotation and Classification of Changes of Involvement in Group Conversation R Bock, S Gluge, I Siegert… – Affective Computing and …, 2013 – ieeexplore.ieee.org … 2159–2162, ISCA. [10] TK Harris and AI Rudnicky, “Teamtalk: A platform for multi-human-robot dialog research in coherent real and virtual spaces,” in Proceedings of the National Conference on Artificial Intelligence, Vancouver, Canada, 2007, vol. 2, pp. 1864–1865, AAAI. … Related articles All 5 versions

Knowing When We Don’t Know: Introspective Classification for Mission-Critical Decision Making H Grimmett, R Paul, R Triebel, I Posner – robots.ox.ac.uk … as the detection of ground traversability (eg [9]), the detection of lanes for autonomous driving (eg [10]), the consideration of classifier output to guide trajectory planning and exploration (see, for example, [11], [12]) or the active disambiguation of human-robot dialogue [13]. … Related articles All 5 versions

Referring in dialogue: alignment or construction? J Viethen, R Dale, M Guhe – 2013 – Taylor & Francis Cited by 1 Related articles All 2 versions

A task-performance evaluation of referring expressions in situated collaborative task dialogues P Spanger, R Iida, T Tokunaga, A Terai… – Language resources and …, 2013 – Springer A task-performance evaluation of referring expressions in situated collaborative task dialogues Philipp Spanger • Ryu Iida • Takenobu Tokunaga • Asuka Terai • Naoko Kuriyama © Springer Science+Business Media Dordrecht 2013 … Cited by 1 Related articles All 5 versions

The Basics M Eskénazi – … for Speech Processing: Applications to Data …, 2013 – books.google.com … (2009) use a language learning game. Chernova et al. (2010) also use online games to get material for human-robot dialog research. In order to obtain speech for new synthetic voice models, Freitas et al. (2010) use a quiz game with speakers reading text from the screen. … Related articles All 3 versions

Spatiotemporal movement planning and rapid adaptation for manual interaction M Huber, A Kupferberg, C Lenz, A Knoll, T Brandt… – PloS one, 2013 – dx.plos.org … Cited by 2 Related articles All 13 versions

Tuning accessibility of referring expressions in situated dialogue (to appear) EG Bard, R Hill, ME Foster, M Arai – research.ed.ac.uk Cover sheet for Dr Ellen Gurman Bard’s working paper, uploaded to The Edinburgh Research Explorer on 28th October 2013. … Related articles

Generation of effective referring expressions in situated context K Garoufi, A Koller – Language and Cognitive Processes, 2013 – Taylor & Francis Cited by 3 Related articles All 2 versions

State of the art in simulation–driven design M Karlberg, M Löfstrand, S Sandberg… – International Journal of …, 2013 – Inderscience … In 2006, Prommer (Prommer, 2006) presented a simulation-driven approach for developing procedure models for automatic strategy learning, tailored for application within task-oriented human/robot dialogue systems. Further, in 2006, Sarabia et al. … Cited by 1 Related articles All 2 versions

Resilience in High Risk Work: Analysing Adaptive Performance A Rankin – 2013 – diva-portal.org Linköping Studies in Science and Technology Licentiate Thesis No. 1589 Resilience in High Risk Work: Analysing Adaptive Performance by Amy Rankin Department of Computer and Information Science Linköpings universitet SE-581 83 Linköping, Sweden … Related articles All 2 versions

Self-help: Seeking out perplexing images for ever improving topological mapping R Paul, P Newman – The International Journal of Robotics …, 2013 – ijr.sagepub.com … Here, data points with maximum uncertainty obtained by combining the posterior mean and variance estimates are queried for labels and used to improve the classifier. The recent work by Tellex et al. (2012) explores active information gathering for human–robot dialog. … Cited by 1 Related articles All 3 versions

FlightCrew Browser: a safe browser for drivers AH López-Pineda – 2013 – dspace.mit.edu FlightCrew Browser: A Safe Browser for Drivers by Andres Humberto López-Pineda SB, Massachusetts Institute of Technology, 2012 Submitted to the Department of Electrical Engineering and Computer Science in Partial … Related articles All 3 versions

Interacting with a Self-portrait Camera Using Gestures S Chu – 2013 – iplab.cs.tsukuba.ac.jp Interacting with a Self-portrait Camera Using Gestures Graduate School of Systems and Information Engineering University of Tsukuba July 2013 Shaowei Chu Abstract Most existing digital camera user interfaces place little emphasis on self-portrait options. … Related articles All 2 versions

Classification of Finger Movements for the Dexterous Hand Prosthesis Control with Surface Electromyography AH Al-Timemy, G Bugmann… – … , IEEE Journal of, 2013 – ieeexplore.ieee.org … Cited by 13 Related articles All 6 versions

Mission Experience: How to Model and Capture it to Enable Vicarious Learning D Andersson – 2013 – diva-portal.org Linköping Studies in Science and Technology Licentiate Thesis No. 1582 Mission Experience: How to Model and Capture it to Enable Vicarious Learning by Dennis Andersson Department of Computer and Information … Related articles All 3 versions

Anomaly Detection and its Adaptation: Studies on Cyber-Physical Systems M Raciti – 2013 – diva-portal.org Linköping Studies in Science and Technology Licentiate Thesis No. 1586 Anomaly Detection and its Adaptation: Studies on Cyber-Physical Systems by Massimiliano Raciti Department of Computer and Information Science … Related articles All 3 versions