Joint Learning in Human-Robot Collaboration

Vancouver, BC, Canada, September 28, 2017

As more and more robots are introduced into our everyday lives and workspaces, we find new ways in which they can provide support with many different tasks. In such increasingly hybrid environments, our interactions with robots and other virtual agents will grow, requiring novel forms of cooperation and collaboration. Robots should learn to adapt to our needs over long timescales and help free us of tedious tasks. This calls for autonomous robots that do not need detailed instructions but can operate freely within boundary conditions that specify high-level goals.

Shared Autonomy focuses on how autonomous systems can successfully interact with and shape each other's autonomy spaces. It is concerned with how two or more autonomous agents mediate their individual and joint contributions, on the one hand to an overarching goal, and on the other hand to their individual goals. The workshop aims, first, to address the underlying theoretical issues and potential models. Second, its main focus is on the realization of such models in robotic systems and on answering the question of how such systems can successfully realize collaboration between humans and robots.

This leads to a set of research questions that we want to address in the workshop:

  1. What are models and processes (including their levels and time scales) that can help organize the sharing of autonomy spaces?
  2. How do we represent agents' intentions as part of a robot's internal representation? This requires (partially) integrating users' intentions and states into the robot's internal model, and reconciling them with the robot's own world perception, intentions, and state. When robots become better at recognizing a user's intention, they can provide suitable support for achieving joint goals.
  3. As autonomy inherently limits predictability, what does this imply for the development of robust mechanisms for coordinating autonomous agents? Sharing autonomy will require rich new predictive models of humans and autonomous agents based on powerful statistical methods.
  4. How can learning be utilized in a collaborative setting in order to allow for the extension of the robot's active world model? Robots should be able to take advantage of emerging joint goal-directed behavior by enlarging their world model. Learning should continue throughout interactions and over long timescales.
  5. How can systems be autonomous and safe? In interaction with humans, systems should be guaranteed to act safely and not endanger humans or other agents. This requires formal specifications for both the control and the performance of the overall system.
  6. How can we model and reason on the influence of the actions of one agent on the actions or behavior of other autonomous agents in the environment? This is especially important in decentralized systems in which the agents do not share a common representation of the world state.
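As a concrete illustration of question 2, intention recognition is often framed as maintaining a belief over a user's candidate goals and updating it as actions are observed. The following is a minimal sketch of such a Bayesian belief update; the goal names, observations, and likelihood values are invented for illustration and do not come from the workshop description.

```python
def update_belief(prior, likelihoods, observation):
    """Bayes update: P(goal | obs) is proportional to P(obs | goal) * P(goal)."""
    posterior = {g: prior[g] * likelihoods[g][observation] for g in prior}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Two hypothetical goals the user might pursue; the robot starts with a
# uniform prior over them.
belief = {"hand_over_tool": 0.5, "reach_for_part": 0.5}

# Hypothetical observation likelihoods P(observed motion | goal): reaching
# toward the tool tray is far more likely under the hand-over goal.
likelihoods = {
    "hand_over_tool": {"toward_tray": 0.8, "toward_bin": 0.2},
    "reach_for_part": {"toward_tray": 0.1, "toward_bin": 0.9},
}

# After observing the user reach toward the tray, the belief shifts
# strongly toward the tool hand-over goal.
belief = update_belief(belief, likelihoods, "toward_tray")
```

In a real system the belief would be updated continuously from sensor data and combined with the robot's own world model, but the same inference pattern underlies many intention-recognition approaches.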

The workshop is targeted towards an audience of experts in human-robot interaction and collaboration. It aims to bring together different perspectives, e.g. core and cognitive robotics, service robotics, developmental robotics, social robotics, and HRI. The goal is to highlight the common issues across these diverse perspectives as robots become increasingly autonomous in different contexts and application scenarios. In particular, the workshop will provide a platform for formulating new ideas and proposals to overcome existing limitations, and for discussing future research directions and strategies.