Many approaches to robot learning observe the human in order to emulate or otherwise learn from human behavior. However, humans often do not know how to interact with the robot, and visual analysis of humans is difficult. In the Curious Robot scenario, we have therefore turned the usual paradigm around: the robot asks the human questions about objects, its environment, and so on.
The latest iteration of the Curious Robot scenario focuses on two issues: 1) more intuitive grasp teaching and 2) continuous feedback on the interaction state. Both aspects featured prominently in our previous interaction studies, and adding them has been much anticipated. Preliminary tests have been quite successful, and we are currently preparing more in-depth studies.
For grasp teaching, we use a CyberGlove II hand-posture sensor, which allows people to demonstrate a grasp naturally, simply by performing it. Grasps are currently categorized into two types: power grasp and precision grip.
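To illustrate the categorization step, here is a minimal sketch of how a demonstrated posture might be mapped to one of the two grasp types. It assumes the glove driver provides per-finger flexion angles; the feature names, thresholds, and the `classify_grasp` function are all hypothetical simplifications, not the system's actual implementation.

```python
# Hypothetical sketch: classify a demonstrated hand posture as a
# power grasp or a precision grip, assuming the CyberGlove driver
# yields average flexion per finger in degrees. Thresholds and
# feature choice are illustrative assumptions only.

def classify_grasp(flexion):
    """Return 'power' or 'precision' for a demonstrated posture.

    flexion: dict mapping finger name -> average joint flexion (degrees).
    Heuristic: a power grasp wraps all fingers around the object
    (high flexion also on ring and little finger), while a precision
    grip uses mainly thumb and index, leaving the other fingers open.
    """
    ulnar = (flexion["ring"] + flexion["little"]) / 2.0
    radial = (flexion["thumb"] + flexion["index"]) / 2.0
    if ulnar > 60.0 and radial > 40.0:
        return "power"
    return "precision"

# Thumb and index curled, ring and little finger open:
posture = {"thumb": 50, "index": 55, "middle": 30, "ring": 15, "little": 10}
print(classify_grasp(posture))  # -> precision
```

In practice such a decision would be made on the glove's full calibrated sensor vector rather than two averaged features, but the two-way split shown here mirrors the power/precision distinction described above.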
Mixed-Initiative in Object Learning
The video below shows the experimental setup, with the big PA-10
arm and the Shadow hand in the foreground and the humanoid torso
BARTHOC in the background.
We are addressing several different questions with this scenario. The first, published at ICRA 2009, concerns how to give the human appropriate guidance, which is actually not obvious at all.
Other questions under investigation include system architecture, behavior modeling, vision for interaction, and so on.
- "Curious Robot - Structuring Interactive Robot Learning", International Conference on Robotics and Automation, Kobe, Japan, 2009.
- Nagai, Y., C. Muhl, and K. J. Rohlfing,
"Toward Designing a Robot that Learns Actions from Parental Demonstrations",
The 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, pp. 3545-3550, 19/05/2008.