Applied Informatics Group

The Curious Robot

Many approaches to robot learning observe the human in order to emulate or otherwise learn from human behavior. However, humans often do not know how to interact with a robot, and visual analysis of human behavior is hard. In the Curious Robot scenario, we have therefore turned the usual paradigm around: the robot asks the human questions about objects, its environment, and so on.

Multi-Modal Grasp-Learning

The latest iteration of the Curious Robot scenario focuses on two issues: 1) more intuitive grasp teaching and 2) continuous feedback on the interaction state. Both aspects featured prominently in our previous interaction studies, and their addition has been much anticipated. Preliminary tests have been quite successful, and we are currently preparing more in-depth studies.
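One way to think about continuous feedback on the interaction state is as a mapping from the robot's internal processing state to a user-visible signal, so the human always knows what the system is doing. The following sketch is purely illustrative; the state names and feedback behaviors are hypothetical and not taken from the actual system.

```python
# Illustrative only: map each internal interaction state to a
# user-visible feedback behavior. States and signals are hypothetical.

FEEDBACK = {
    "listening":  "nods slightly, gaze at human",
    "processing": "gaze at object, 'hmm' sound",
    "grasping":   "verbal announcement: 'I will grasp it now.'",
    "confused":   "tilts head, asks for clarification",
}

def feedback_for(state):
    """Return the feedback behavior for an interaction state.

    Unknown states fall back to the clarification behavior, so the
    human is never left without any signal.
    """
    return FEEDBACK.get(state, FEEDBACK["confused"])

print(feedback_for("processing"))  # gaze at object, 'hmm' sound
print(feedback_for("unknown"))     # falls back to clarification behavior
```

The point of the fallback branch is that silence is the worst feedback: even when the system is in an unexpected state, the human receives some observable cue.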

For grasp teaching, we use a CyberGlove II hand-posture sensor, which allows people to demonstrate a grasp naturally -- by performing it. At the moment, grasps are categorized into two types: power grasp and precision grip.
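The distinction between the two grasp types can be sketched in code: a power grasp wraps all fingers around the object, while a precision grip mainly opposes thumb and index finger. The thresholds and joint names below are hypothetical placeholders, not the actual CyberGlove II calibration or the classifier used in the system.

```python
# Illustrative sketch only: classify a demonstrated grasp as
# "power grasp" or "precision grip" from finger-joint flexion angles.
# Joint names and the 60-degree threshold are hypothetical.

def classify_grasp(joint_angles):
    """Classify a grasp from finger-joint flexion angles (degrees).

    If the outer fingers (ring, little) are strongly flexed, the whole
    hand wraps around the object -> power grasp. Otherwise the grasp is
    assumed to oppose thumb and index finger -> precision grip.
    """
    outer_fingers = [joint_angles["ring"], joint_angles["little"]]
    if all(angle > 60.0 for angle in outer_fingers):
        return "power grasp"
    return "precision grip"

# Example readings (hypothetical):
print(classify_grasp({"thumb": 30, "index": 45, "ring": 80, "little": 75}))
print(classify_grasp({"thumb": 40, "index": 50, "ring": 20, "little": 15}))
```

A real classifier would of course use all 22 glove sensors and learned decision boundaries rather than a single hand-set threshold.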


Mixed-Initiative in Object Learning

The video below shows the experimental setup, with the large PA-10 arm and the Shadow hand in the foreground and the humanoid torso BARTHOC in the background.
We are addressing several different questions with this scenario. The first, published at ICRA 2009 [1], concerns how to give the human appropriate guidance, which is not at all obvious.

Other questions under investigation include system architecture, behavior modeling, and vision for interaction.
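The mixed-initiative idea can be sketched as a simple loop: the robot takes the initiative and asks about objects it does not yet know, while the human can volunteer labels at any time. Everything below (class, method names, object identifiers) is a hypothetical illustration of the interaction pattern, not the system's actual dialogue component.

```python
# Minimal sketch of mixed-initiative object learning (hypothetical API):
# the robot asks about unknown objects, the human answers or volunteers
# labels unprompted.

class CuriousRobot:
    def __init__(self):
        self.known = {}  # object id -> label taught by the human

    def next_question(self, visible_objects):
        """Robot initiative: ask about the first unknown object, if any."""
        for obj in visible_objects:
            if obj not in self.known:
                return f"What is this object ({obj})?"
        return None  # nothing left to ask; initiative passes to the human

    def teach(self, obj, label):
        """Human initiative: provide a label, solicited or not."""
        self.known[obj] = label

robot = CuriousRobot()
scene = ["obj-1", "obj-2"]
print(robot.next_question(scene))   # robot asks about obj-1
robot.teach("obj-1", "cup")         # human answers the question
robot.teach("obj-2", "banana")      # human volunteers a label unprompted
print(robot.next_question(scene))   # None: nothing left to ask
```

The key property of mixed initiative is visible in the last two calls: the human's unsolicited label for obj-2 is treated exactly like an answer, so the robot never asks a redundant question.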



Related Publications

  1. Lütkebohle I, Peltason J, Schillingmann L, Elbrechter C, Wrede B, Wachsmuth S, Haschke R (2009). The Curious Robot - Structuring Interactive Robot Learning. In: International Conference on Robotics and Automation (ICRA). IEEE: 4156-4162.
