Learning by Interacting
Interaction with a human offers a great opportunity for a robot to acquire various skills. However, since human behavior is sometimes unpredictable, this approach involves several difficulties. To be capable of reacting to unexpected situations, future systems need the ability to extend their knowledge during the course of interaction.
Robots and smart environments are currently not able to exploit the full bandwidth of human communication cues. Especially when learning from interaction, multimodal information is crucial for developing the necessary representations.
On the one hand, our research concentrates on the problem of structuring and understanding multimodal action demonstrations using so-called Acoustic Packaging (see below).
Another important direction of our research is to apply learning methods that allow the robot to develop a model of itself and its environment that can adapt or specialize during the robot's lifetime. This follows the paradigm of Developmental Learning (see below).
A third line of research focuses on the interaction loop between tutor and learner. We investigate how tutors present information to the learner/recipient (child, robot) and explore to which extent a robot could use strategies (e.g. gaze cues) to pro-actively contribute to shaping the tutor's presentation.
Research on child development has shown that the temporal relations of events in the acoustic and visual modality have a significant impact on how this information is processed. Particularly, temporally overlapping events seem to have a stronger effect on action and language learning than non-overlapping events. This idea has been proposed by Hirsh-Pasek and Golinkoff (1996) as acoustic packaging. They suggest that acoustic information, typically in the form of narration, overlaps with action sequences and provides infants with a bottom-up guide to attend to relevant parts and to find structure within them.
Our computational model of acoustic packaging binds visual and acoustic events into acoustic packages based on their temporal overlap. The model is able to segment action demonstrations into multimodal units, which are called acoustic packages. These units facilitate measuring the level of structuring in action demonstrations. In addition to action segmentation, our model is able to flexibly integrate additional sensory cues to acquire first knowledge about the content of action demonstrations. Furthermore, the acoustic packaging system was designed to process input online, which enables it to provide feedback to users engaging in an interaction with a robot.
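The core binding step can be illustrated with a toy sketch (this is not the group's actual implementation; event representations, function names, and the one-package-per-utterance grouping are illustrative assumptions). Events are given as (start, end) time intervals, and each acoustic event is bound to the visual events it temporally overlaps:

```python
def overlaps(a, b):
    """True if the intervals a = (start, end) and b = (start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def acoustic_packages(acoustic_events, visual_events):
    """Bind each acoustic event (e.g. a narration segment) to the visual
    events it temporally overlaps, yielding one multimodal package per
    acoustic event. All intervals are (start, end) tuples in seconds."""
    packages = []
    for speech in acoustic_events:
        visuals = [v for v in visual_events if overlaps(speech, v)]
        if visuals:
            packages.append({"speech": speech, "visual": visuals})
    return packages

# Toy demonstration: two narration segments overlapping parts of a motion sequence.
speech = [(0.0, 1.2), (2.0, 3.5)]
motion = [(0.5, 1.0), (1.4, 1.9), (2.2, 3.0)]
print(acoustic_packages(speech, motion))
```

In this sketch the acoustic modality drives packaging, mirroring the idea that narration provides the bottom-up guide to segmenting the visual action stream; an online system would apply the same overlap test incrementally as events arrive.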
In human robot interaction the human is likely to produce unexpected behavior at some point. In order for the robot to include new experiences in learning, the learning method has to be designed accordingly. Developmental learning tries to overcome preprogrammed behavior: motor or communication skills are instead acquired in an emergent, infant-like way.
We proposed a model that uses autonomous, target-oriented exploration to acquire speech production skills. The overarching vision is to implement a model that learns to produce speech from scratch, having available only its own vocal tract and acoustic experiences from its environment.
The proposed developmental model of speech acquisition could enable robots to learn to produce a variety of different speech sounds in a human-like way. Additionally, it constitutes a framework for investigating mechanisms that humans (especially infants) make use of when learning how to speak, allowing us to gain important insights, e.g. concerning the role of hyperarticulation in infancy.
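The exploration principle behind this line of work can be sketched in a few lines (a toy stand-in, not the published model: the one-dimensional "vocal tract", the nearest-neighbour inverse model, and all parameter values are illustrative assumptions). The learner repeatedly picks an acoustic goal, uses its current inverse model plus exploration noise to choose an articulation, tries it, and stores the result:

```python
import math
import random

def vocal_tract(articulation):
    # Unknown forward model the learner can only query by acting
    # (a one-dimensional stand-in for an articulatory synthesizer).
    return math.sin(articulation)

# Memory of (outcome, articulation) pairs; the inverse model is
# a simple nearest-neighbour lookup over this memory.
memory = [(vocal_tract(0.0), 0.0)]

def inverse(goal):
    # Return the stored articulation whose outcome is closest to the goal.
    return min(memory, key=lambda m: abs(m[0] - goal))[1]

random.seed(0)
for _ in range(2000):
    goal = random.uniform(-1.0, 1.0)               # target acoustic outcome
    action = inverse(goal) + random.gauss(0, 0.1)  # exploit model + explore
    outcome = vocal_tract(action)                  # try it on the vocal tract
    memory.append((outcome, action))               # learn from the attempt

# After exploration, check how closely a new goal can be reproduced.
goal = 0.5
print(f"error at goal {goal}: {abs(vocal_tract(inverse(goal)) - goal):.3f}")
```

The key property of such target-oriented (goal-directed) exploration is that the learner samples goals in outcome space rather than motor commands at random, so competence spreads outward from what it can already do, which is far more sample-efficient in high-dimensional articulatory spaces.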
2014 | Journal Article | PUB-ID: 2662158
Tutoring in adult-child-interaction: On the loop of the tutor's action modification and the recipient's gaze
Pitsch K, Vollmer A-L, Rohlfing K, Fritsch J, Wrede B (2014)
Interaction Studies 15(1): 55-98.
2013 | Journal Article | PUB-ID: 2622414
Robot feedback shapes the tutor's presentation. How a robot's online gaze strategies lead to micro-adaptation of the human's conduct
Pitsch K, Vollmer A-L, Muehlig M (2013)
Interaction Studies 14(2): 268-296.
2012 | Journal Article | PUB-ID: 2604372
Tutor spotter: Proposing a feature set and evaluating it in a robotic system
Lohan KS, Rohlfing K, Pitsch K, Saunders J, Lehmann H, Nehaniv CL, Fischer K, Wrede B (2012)
International Journal of Social Robotics 4(2): 131-146.
2011 | Conference Paper | PUB-ID: 2144469
Using Prominence Detection to Generate Acoustic Feedback in Tutoring Scenarios
Schillingmann L, Wagner P, Munier C, Wrede B, Rohlfing K (2011)
In: Interspeech 2011 (12th Annual Conference of the International Speech Communication Association). 3105-3108.
2013 | Book Chapter | PUB-ID: 2605326
Making Use of Multi-Modal Synchrony: A Model of Acoustic Packaging to Tie Words to Actions
Wrede B, Schillingmann L, Rohlfing K (2013)
In: Theoretical and Computational Models of Word Learning: Trends in Psychology and Artificial Intelligence. Gogate L, Hollich G (Eds); IGI Global: 224-240.
2015 | Conference Paper | PUB-ID: 2759013
Efficient Bootstrapping of Vocalization Skills Using Active Goal Babbling
Philippsen A, Reinhart F, Wrede B (2015)
Presented at the International Workshop on Speech Robotics at Interspeech 2015, Dresden, Germany.
2016 | Conference Paper | PUB-ID: 2904706
Goal Babbling of Acoustic-Articulatory Models with Adaptive Exploration Noise
Philippsen A, Reinhart F, Wrede B (2016)
Presented at the Sixth Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob), Cergy-Pontoise / Paris, France.
2017 | Conference Paper | PUB-ID: 2909223
Hyperarticulation Aids Learning of New Vowels in a Developmental Speech Acquisition Model
Philippsen A, Reinhart F, Wrede B, Wagner P (Accepted)
Presented at the International Joint Conference on Neural Networks, Anchorage, Alaska.
Recent Best Paper/Poster Awards
Philippsen A, Reinhart F, Wrede B (2016)
International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob)
Richter V, Carlmeyer B, Lier F, Meyer zu Borgsen S, Kummert F, Wachsmuth S, Wrede B (2016)
International Conference on Human-agent Interaction (HAI)
Carlmeyer B, Schlangen D, Wrede B (2016)
International Conference on Human Agent Interaction (HAI)