Notes:
The goal of the JAMES project (Joint Action for Multimodal Embodied Social Systems) was to develop a robot capable of interacting with humans in a socially appropriate manner and of performing tasks that require social interaction and cooperation. The project aimed to create a robot that could understand and respond to social cues such as facial expressions, gestures, and body language, and that could engage in natural, two-way communication with humans. The JAMES robot was also designed to learn from its interactions with humans and to adapt its behaviour accordingly. More broadly, the project sought to advance the state of the art in social robotics and to develop technologies applicable to areas such as education, healthcare, and assistive technology.
Resources:
- james-project.eu: Joint Action for Multimodal Embodied Social Systems, 2011–2014
See also:
Modelling state of interaction from head poses for social human-robot interaction
A Gaschler, K Huth, M Giuliani, I Kessler… – Proceedings of the …, 2012 – mediatum.ub.tum.de
… These features can be processed by pure geometry [5], by a trained artificial neural network (ANN) [21], or by other non-linear regression methods [13] … 1http://www.james-project.eu … [8] A. Kendon …
Machine learning of social states and skills for multi-party human-robot interaction
ME Foster, S Keizer, Z Wang, O Lemon – Proceedings of the …, 2012 – academia.edu
… Abstract. We describe several forms of machine learning …
Towards action selection under uncertainty for a socially aware robot bartender
ME Foster, S Keizer, O Lemon – Proceedings of the 2014 ACM/IEEE …, 2014 – dl.acm.org
… 270435, JAMES: Joint Action for Multimodal Embodied Social Systems (james-project.eu). References: [1] ME Foster, A. Gaschler, and M. Giuliani. How can I help you? Comparing engagement classification strategies for a robot bartender. In Proceedings of ICMI, 2013 …
Evaluating a social multi-user interaction model using a Nao robot
S Keizer, P Kastoris, ME Foster… – Robot and Human …, 2014 – ieeexplore.ieee.org
… Based on the current … 1Interaction Lab, Heriot-Watt University, Edinburgh, UK, s.keizer@hw.ac.uk … 1EU FP7 project JAMES: james-project.eu Fig. 1. Nao/Kinect robot bartender … 270435, JAMES: Joint Action for Multimodal Embodied Social Systems, http://james-project.eu …
Planning for social interaction with sensor uncertainty
ME Foster, RPA Petrick – Proceedings of the ICAPS 2014 Scheduling …, 2014 – dcs.gla.ac.uk
… Key to our approach is the use of a high-level planner for action selection in the robot system, in the place of a traditional interaction manager (Larsson and Traum 2000) … 1See www.james-project.eu for more information. Figure 1: The JAMES robot bartender …
Separating representation, reasoning, and implementation for interaction management
ME Foster, RPA Petrick – Proceedings of the Seventh …, 2016 – research.ed.ac.uk
… Acknowledgements: This research was supported in part by the European Commission's Seventh Framework Programme through grant no. 270435 (JAMES, james-project.eu) and grant no. 610917 (STAMINA, stamina-robot.eu) …
Using General-Purpose Planning for Action Selection in Human-Robot Interaction
RPA Petrick, ME Foster – 2016 AAAI Fall Symposium Series, 2016 – aaai.org
… al. 2012) … 1http://james-project.eu/ … Acknowledgements: This research has been partially funded by the EU's 7th Framework Programme under grant no. 270435 (JAMES, http://james-project.eu/) and grant no. 610917 …
Impact of robot actions on social signals and reaction times in HRI error situations
N Mirnig, M Giuliani, G Stollnberger, S Stadler… – … Conference on Social …, 2015 – Springer
… to assemble target objects from a wooden toy construction set, together with a human partner [4]; (c) IURO could autonomously navigate through crowded inner-city environments, while proactively approaching pedestrians to request direction … 1http://www.james-project.eu …
User evaluation of a multi-user social interaction model implemented on a Nao robot
S Keizer, P Kastoris, ME Foster, A Deshmukh… – Proceedings of the …, 2013 – dcs.gla.ac.uk
… 1EU FP7 project JAMES: james-project.eu … 2.1 Vision module. Fig. 2. Kinect-based vision system … 270435, JAMES: Joint Action for Multimodal Embodied Social Systems, http://james-project.eu/ …
Towards a scientific foundation for engineering Cognitive Systems–A European research agenda, its rationale and perspectives
HG Stork – Biologically Inspired Cognitive Architectures, 2012 – Elsevier
… fr/). • HUMANOBS (“Humanoids that Learn Socio-Communicative Skills by Observation”, http://www.humanobs.org/). • JAMES (“Joint action for multimodal embodied social systems”, http://www.james-project.eu). • NIFTi (“Natural …
Knowledge-level planning for robot task planning and human-robot interaction
R Petrick, A Gaschler – RSS Workshop on Combining AI …, 2015 – mediatum.ub.tum.de
… In particular, scenarios are taken from the JAMES project (http://james-project.eu/), which explored social interaction and task planning in a robot bartender domain, and recent work on the STAMINA project (http://stamina-robot.eu/), which is exploring robotic object manipulation …
Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations
M Giuliani, N Mirnig, G Stollnberger, S Stadler… – Frontiers in …, 2015 – frontiersin.org
Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognise when error situations occur. We investigated the verbal and non-verbal social signals that humans show when …
Extending Knowledge-Level Contingent Planning for Robot Task Planning
R Petrick, A Gaschler – International Conference on Automated …, 2014 – mediatum.ub.tum.de
… The rest of this paper is organised as follows. We first present an overview of PKS and then describe three extensions to the basic planning system which enhance its ability to operate in robotics domains … 1See http://james-project.eu/ for more information.
Case Studies in Design Informatics
R Petrick – 2015 – Citeseer
… Joint Action for Multimodal Embodied Social Systems · james-project.eu • 3.5 year project (2011–2014) • Consortium of 5 European partners … JAMES interaction video (image/video: fortiss GmbH) http://youtu.be/8k7Pd-CbbhE http://james-project.eu …
What would you like to drink? Knowledge-level planning for a social robot bartender
R Petrick – compute.dtu.dk
JOINT ACTION FOR MULTIMODAL EMBODIED SOCIAL SYSTEMS · james-project.eu What would you like to drink … Image: fortiss GmbH • Part of the JAMES Project (http://james-project.eu/), funded by the European Commission, exploring social interaction in robotics domains …
Action Selection for Interaction Management: Opportunities and Lessons for Automated Planning
R Petrick, ME Foster – 2016 – eprints.gla.ac.uk
… Acknowledgements: The research leading to these results has received funding from the European Union's Seventh Framework Programme under grant no. 270435 (JAMES, james-project.eu) and grant no. 610917 (STAMINA, stamina-robot.eu) …
Extending the Knowledge-Level Approach to Planning for Social Interaction
RPA Petrick – research.ed.ac.uk
… about uncertain speech hypotheses. This work is part of the JAMES project (Joint Action for Multimodal Embodied Social Systems; see james-project.eu). Figure 1: The JAMES robot bartender Knowledge-Level Planning with …
Conclusion and Future Research Directions
O Pietquin – Data-Driven Methods for Adaptive Spoken Dialogue …, 2012 – Springer
Applying Topic Recognition to Spoken Language in Human-Robot Interaction Dialogues
M Giuliani, T Marschall, M Tscheligi – Proceedings of the 2014 …, 2014 – dl.acm.org
… to serve the correct drink. However, the system evaluations of the JAMES robot bartender [1, 2, 4] have shown that customers, besides ordering drinks, also want to … 1http://www.james-project.eu …
ICMI 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction
ME Foster, M Giuliani, R Petrick – … of the 16th International Conference on …, 2014 – dl.acm.org
… towards multi-party, short-term, dynamic human-robot interaction, and the ICSR 2011 tutorial on Joint action for social robotics: how to build a robot that works together with several humans; details on the whole workshop series are available through http://www.james-project.eu …
Separating Representation, Reasoning, and Implementation for Interaction Management: Lessons from Automated Planning
ME Foster, RPA Petrick – Dialogues with Social Robots, 2017 – Springer
… Acknowledgements: This research has been partially funded by the European Union's Seventh Framework Programme for research, technological development and demonstration under grant no. 270435 (JAMES, http://james-project.eu/) and grant no …
Towards a Scientific Foundation for Engineering Cognitive Systems.
HG Stork – AAAI Fall Symposium: Biologically Inspired …, 2009 – pdfs.semanticscholar.org
… inrialpes.fr/) – HUMANOBS (“Humanoids that learn socio-communicative skills by observation”, http://www.humanobs.org/) – JAMES (“Joint action for multimodal embodied social systems”, http://james-project.eu) – NIFTi (“Natural …
Social behavior recognition using body posture and head pose for human-robot interaction
A Gaschler, S Jentzsch, M Giuliani… – … Robots and Systems …, 2012 – ieeexplore.ieee.org
… A. Knoll is with the Department of Informatics, Technische Universität München, Munich, Germany. 1http://www.james-project.eu Fig. 1. The JAMES robot is able to recognize the intentions of humans in a multi-party bar scenario …
Planning perception and action for cognitive mobile manipulators
A Gaschler, S Nogina, RPA Petrick… – Intelligent Robots and …, 2014 – spiedigitallibrary.org
… This research was supported in part by the European Union’s 7th Framework Programme through the JAMES project (Joint Action for Multimodal Embodied Social Systems, grant no. 270435, http://www.james-project.eu/). REFERENCES. 1 …
Plan-based social interaction with a robot bartender
RPA Petrick, ME Foster – Proceedings of the ICAPS 2013 Application …, 2013 – Citeseer
… Ronald PA Petrick, School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, Scotland, UK, rpetrick@inf.ed.ac.uk; Mary Ellen Foster, School of Mathematical …
Using ellipsis detection and word similarity for transformation of spoken language into grammatically valid sentences
M Giuliani, T Marschall, A Isard – … of the 15th Annual Meeting of the …, 2014 – aclweb.org
… The JAMES robot grammar was initially very restricted, and therefore during grammar development as well as during the user studies that … 1http://www.james-project.eu Figure 1: The robot bartender of the JAMES project interacting with a customer …
Two people walk into a bar: Dynamic multi-party social interaction with a robot agent
ME Foster, A Gaschler, M Giuliani, A Isard… – Proceedings of the 14th …, 2012 – dl.acm.org
… Mary Ellen Foster, Andre Gaschler, Manuel Giuliani, Amy Isard, Maria Pateraki, Ronald PA Petrick … School of Mathematical …
Using embodied multimodal fusion to perform supportive and instructive robot roles in human-robot interaction
M Giuliani, A Knoll – International Journal of Social Robotics, 2013 – Springer
We present a robot that is working with humans on a common construction task. In this kind of interaction, it is important that the robot can perform different roles in order to realise an efficient c.
Responsive fingers—capacitive sensing during object manipulation
S Mühlbacher-Karrer, A Gaschler… – Intelligent Robots and …, 2015 – ieeexplore.ieee.org
… tion. A high-quality version of the video can be found at: https://youtu.be/htc3lj8Los0 Acknowledgment: This research was in part supported by the European Commission through the project JAMES (FP7-270435-STREP, www.james-project.eu) …
Social state recognition and knowledge-level planning for human-robot interaction in a bartender domain
RPA Petrick, ME Foster, A Isard – AAAI 2012 Workshop on Grounding …, 2012 – aaai.org
… underway. This work forms part of a larger project called JAMES, Joint Action for Multimodal Embodied Social Systems, exploring social interaction with embodied robot systems. 1See http://james-project.eu/ for more information.
Social Signal Recognition Using Body Posture and Head Pose for Human-Robot Interaction
A Gaschler, S Jentzsch, M Giuliani, K Huth, J de Ruiter… – uni-bi.de
… A. Knoll is with the Department of Informatics, Technische Universität München, Munich, Germany. 1http://www.james-project.eu Fig. 1. The JAMES robot is able to recognize the intentions of humans in a multi-party bar scenario …
Impact of Robot Actions on Social Signals and Reaction Times in HRI Error Situations
R Buchner, M Tscheligi – … , ICSR 2015, Paris, France, October 26 …, 2015 – books.google.com
… Nicole Mirnig, Manuel Giuliani, Gerald Stollnberger, Susanne Stadler, Roland Buchner, and Manfred Tscheligi, Center for Human …
Robot task planning with contingencies for run-time sensing
A Gaschler, R Petrick, T Kröger… – … on Robotics and …, 2013 – mediatum.ub.tum.de
… Walk Into a Bar: Dynamic Multi-Party Social Interaction with a Robot Agent,” in Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), 2012. 2http://www.james-project.eu/ (accessed Mar. 17, 2013)
Automatically Classifying User Engagement for Dynamic Multi-party Human–Robot Interaction
ME Foster, A Gaschler, M Giuliani – International Journal of Social …, 2017 – Springer
A robot agent designed to engage in real-world human–robot joint action must be able to understand the social states of the human users it interacts with in order to behave appropriately. In particula.
Planning for Social Interaction in a Robot Bartender Domain.
RPA Petrick, ME Foster – ICAPS, 2013 – aaai.org
… Second, while most social robotics systems deal with one-on-one interactive situations … 1See http://james-project.eu/ for more information. A customer approaches the bar and looks at the bartender. ROBOT: [Looks at Customer 1] How can I help you …
Handling uncertain input in multi-user human-robot interaction
S Keizer, ME Foster, A Gaschler… – Robot and Human …, 2014 – ieeexplore.ieee.org
… 270435, JAMES: Joint Action for Multimodal Embodied Social Systems (james-project.eu). References: [1] M. White, “Efficient realization of coordinate structures in Combinatory Categorial Grammar,” Research on Language and Computation, vol. 4, no. 1, pp. 39–75, 2006 …
What would you like to drink? Recognising and planning with social states in a robot bartender domain
RPA Petrick, ME Foster – Proceedings of the 8th International Conference …, 2012 – aaai.org
… concrete actions for each output channel that correctly realises high-level plans. 1See http://james-project.eu/ for more information. Figure 3: System architecture The robot hardware consists of a pair of manipulator arms with …
Validating attention classifiers for multi-party human-robot interaction
ME Foster – Proceedings of the 2014 ACM/IEEE …, 2014 – workshops.acin.tuwien.ac.at
… 270435, JAMES: Joint Action for Multimodal Embodied Social Systems (james-project.eu). 7. REFERENCES [1] Weka primer. http://weka.wikispaces.com/Primer. [2] D. Aha and D. Kibler. Instance-based learning algorithms. Machine Learning, 6:37–66, 1991 …
Evaluating supportive and instructive robot roles in human-robot interaction
M Giuliani, A Knoll – International Conference on Social Robotics, 2011 – Springer
… In: Intelligent Human Computer Systems for Crisis Response and Management (ISCRAM 2007), Delft, Netherlands (May 2007) 1http://www6.in.tum.de/Main/ResearchJast 2http://www.james-project.eu …
Robot task and motion planning with sets of convex polyhedra
A Gaschler, R Petrick, T Kröger… – … Science and Systems …, 2013 – mediatum.ub.tum.de
… Two People Walk Into a Bar: Dynamic Multi-Party Social Interaction with a Robot Agent. In Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), 2012. 2http://www.james-project.eu/ (accessed May 20, 2013) …
Training and evaluation of an MDP model for social multi-user human-robot interaction
S Keizer, ME Foster, O Lemon, A Gaschler… – Proceedings of the …, 2013 – aclweb.org
… The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007–2013) under grant agreement no. 270435, JAMES: Joint Action for Multimodal Embodied Social Systems, http://james-project.eu …
Knowledge-Level Planning for Task-Based Social Interaction
RPA Petrick, ME Foster – Workshop of the UK Planning and …, 2012 – research.ed.ac.uk
… To address these challenges, the system architecture builds on a standard three-layer structure: the low level deals … 1See http://james-project.eu/ for more information.
Extending knowledge-level planning with sensing for robot task planning
RPA Petrick, A Gaschler – Proceedings of PlanSIG, 2014 – research.ed.ac.uk
MOPL: A multi-modal path planner for generic manipulation tasks
S Jentzsch, A Gaschler, O Khatib… – Intelligent Robots and …, 2015 – ieeexplore.ieee.org
… 23, no. 7-8, pp. 729–746, 2004. 3http://www.james-project.eu/ 4http://www.smerobotics.org/ 5https://www.humanbrainproject.eu/ [10] J. Barry, K. Hsiao, L.P. Kaelbling, and T. Lozano-Pérez, “Manipulation with multiple action types,” in Experimental Robotics, ser …
Intelligent Management of Hierarchical Behaviors Using a NAO Robot as a Vocational Tutor
SS Goenaga Silvera – 2017 – manglar.uninorte.edu.co
Universidad del Norte, Department of Electrical and Electronics Engineering. Master's thesis by Selene Sol Goenaga Silvera; advisor: Dr. Christian G. Quintero M. …
Kinect-enabled activity recognition of multiple human actors for a service robot
S Jentzsch – 2011 – mediatum.ub.tum.de
Fakultät für Informatik, Technische Universität München. Bachelor's thesis in Informatics: Kinect-enabled activity recognition of multiple human actors for a service robot, by Sören Jentzsch …
How can I help you? Comparing engagement classification strategies for a robot bartender
ME Foster, A Gaschler, M Giuliani – Proceedings of the 15th ACM on …, 2013 – dl.acm.org
… Mary Ellen Foster, School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK, MEFoster@hw.ac.uk …
Comparing task-based and socially intelligent behaviour in a robot bartender
M Giuliani, R Petrick, ME Foster, A Gaschler… – Proceedings of the 15th …, 2013 – dl.acm.org
… Manuel Giuliani, Ronald PA Petrick, Mary Ellen Foster, Andre Gaschler, Amy Isard, Maria Pateraki, Markos Sigalas … fortiss GmbH …
The bounding mesh algorithm
A Gaschler, Q Fischer, A Knoll – 2015 – mediatum.ub.tum.de
… Silhouette mapping. Computer Science Technical Report TR-1-99, Harvard University, 1999. 1http://www.boundingmesh.com/ 2http://www.james-project.eu/ 3http://www.smerobotics.org/ … [10] Hugues H. Hoppe. Progressive hulls, July 1, 2003. US Patent 6,587,104 …
Intuitive robot tasks with augmented reality and virtual obstacles
A Gaschler, M Springer, M Rickert… – … and Automation (ICRA) …, 2014 – ieeexplore.ieee.org
… 57, no. 1, pp. 37–40, 2008. 3http://www.james-project.eu/ 4http://www.smerobotics.org/ [3] R. Zollner, T. Asfour, and R. Dillmann, “Programming by demonstration: Dual-arm manipulation tasks for humanoid robots,” in Intelligent Robots and Systems, IEEE/RSJ Intl Conf on, vol …
KABouM: Knowledge-Level Action and Bounding Geometry Motion Planner
A Gaschler, RPA Petrick, O Khatib, A Knoll – Journal of Artificial Intelligence …, 2018 – jair.org
Journal of Artificial Intelligence Research 61 (2018) 323–362. Submitted 04/17; published 02/18 … Andre Gaschler, gaschlera@gmail.com, fortiss …
Machine Learning for Social Multiparty Human–Robot Interaction
S Keizer, ME Foster, Z Wang… – ACM Transactions on …, 2014 – dl.acm.org
… Since these signals tend to be noisy, an additional challenge is for the robot behavior to be robust to uncertainty. This work is supported by the EU FP7 project JAMES (Joint Action for Multimodal Embodied Social Systems; see http://james-project.eu/), under grant no. 270435 …