Agenda - October 10, 2016, Daejeon

The goal of the workshop is to challenge existing concepts of shared autonomy and to formulate new ones, identifying relevant key research questions along the way. We therefore foresee a discussion after each session of talks; the final discussion will additionally wrap up all the issues raised over the whole day.

Please Note: due to last minute cancellations this workshop has been changed to a half day workshop starting at 12:30!

Location: Room 101

Time          | Title                                                                    | Authors
12:30 - 13:00 | Welcome and Introduction                                                 |
13:00 - 13:45 | Adaptive Long-Term Autonomy - Empowering end-users of Autonomous Systems | Marc Hanheide
13:45 - 14:30 | Shared Autonomy: Some Facets and Approaches                              | Helge Ritter
14:30 - 15:15 | Shared Autonomy in Programming by Demonstration Approaches               | Lucia Ureche
15:15 - 15:30 | Discussion                                                               |
15:30 - 16:00 | Coffee Break                                                             |
16:00 - 16:45 | Techniques for Robot Navigation and Manipulation Amongst People          | Wolfram Burgard
16:45 - 17:30 | The Interactive iCub: Physical Interaction in Human Environment          | Giorgio Metta
17:30 - 18:00 | Final Discussion                                                         |
18:00         | End                                                                      |


Abstracts

Adaptive Long-Term Autonomy - Empowering end-users of Autonomous Systems

Marc Hanheide

Autonomous robotic systems are on the verge of making real impact in society. In the STRANDS project, we are studying robots that operate autonomously for weeks and months in care and security applications. They learn from their experience and from the day-to-day interactions they have with their users. While we focused on full autonomy at the beginning of the project, our focus has somewhat shifted towards shared and user-adaptive autonomy in the course of the development, as we were responding to end-user needs and requirements. A key lesson learned is that, while full autonomy is certainly possible and scientifically relevant and challenging, perceived and actual safety, usefulness, and acceptance are all increased when users are empowered to partially take control themselves. However, the balance between user-exercised control and full autonomy is a thin line, with appropriate interfaces and levels of interaction requiring careful design. In this talk, I will present how in STRANDS we tried to balance on that line, satisfying a heterogeneous group of end users. I will focus on how we moved from shared autonomy to a concept of adaptive autonomy, in which the autonomous system exploits its experience from long-term operation and interaction to improve its services and decision making.

- http://strands-project.eu

- https://lcas.lincoln.ac.uk/wp/

- http://www.hanheide.net/

Biography, see http://www.hanheide.net/p/marcs-cv.html


Techniques for Robot Navigation and Manipulation Amongst People

Wolfram Burgard

In the future, robots are envisioned to coexist with people and offer various services to them, including navigation and manipulation tasks. In this talk, I will present solutions to different problems that must be solved when robots perform tasks for users. First, I will introduce a method for adapting the navigation strategy of a mobile robot to make it more user-friendly. In addition, I will describe how a robot can learn user preferences for everyday manipulation tasks. Furthermore, I will introduce a brain-controlled robot system that can perform manipulation tasks in a shared autonomy setting. Finally, I will present a robot system that can reliably navigate in urban environments.

Biography, see: http://www2.informatik.uni-freiburg.de/~burgard/


Shared Autonomy in Programming by Demonstration Approaches

Lucia Ureche

Most daily activities require two arms working in physical coordination towards a common goal. In our work, we take a Programming by Demonstration approach to obtain a constraint-based representation of such tasks. Based on kinesthetic recordings of humans manipulating robotic arms or sensorized tools, we learn the task structure, consisting of: the sequence of actions, their corresponding constraints in terms of control variables, stiffness modulation, target objects, and the high-level goal. This allows the robot to autonomously perform a task that cannot be fully specified through language or high-level instructions and that requires experiencing precise interactions with the objects in the environment. We then extend this approach to collaborative tasks by assessing a user's specific way of executing the actions. Based on this additional information, the robot can perform complementary parts of an action in collaboration with a human. Lastly, we extend our focus on force patterns to the analysis of bi-directional handovers. We introduce a human-inspired controller for active handovers based on modeling the dynamics in a continuous and time-independent manner.

Biography, see: http://lasa.epfl.ch/people/member.php?SCIPER=213514