Applied Informatics Group

Learning by Interacting

Humans exchange various types of information when they interact with each other. We therefore analyze human behavior and human-robot interaction, with the aim of building systems that learn from interactions with humans.

Examples are robots that learn actions and language by interacting with a tutor, and smart homes that learn rules from human behavior.

Research Questions

The interaction with a human offers a great opportunity for the robot to acquire various skills. But as human behavior is sometimes unpredictable, this approach involves several difficulties. To be capable of reacting to unexpected situations, future systems need the ability to extend their knowledge during the course of interaction.

Robots and smart environments are currently not able to exploit the full bandwidth of human communication cues. Especially when learning from interaction, multimodal information is crucial for developing the necessary representations.

One line of our research concentrates on the problem of structuring and understanding multimodal action demonstrations using so-called Acoustic Packaging (see below).

Another important direction of our research is to apply learning methods that allow the robot to develop a model of itself and its environment that can adapt or specialize during the robot's lifetime. This follows the paradigm of Developmental Learning (see below).

A third line of research focuses on the interaction loop between tutor and learner. We investigate how tutors present information to the learner (child or robot) and explore to what extent a robot could use strategies (e.g. gaze cues) to pro-actively shape the tutor's presentation [1] [2] [3] [4].

Acoustic Packaging

Research on child development has shown that the temporal relations of events in the acoustic and visual modality have a significant impact on how this information is processed. Particularly, temporally overlapping events seem to have a stronger effect on action and language learning than non-overlapping events. This idea has been proposed by Hirsh-Pasek and Golinkoff (1996) as acoustic packaging. They suggest that acoustic information, typically in the form of narration, overlaps with action sequences and provides infants with a bottom-up guide to attend to relevant parts and to find structure within them.

Our computational model of acoustic packaging binds visual and acoustic events into acoustic packages based on their temporal overlap [5]. The model is able to segment action demonstrations into multimodal units, called acoustic packages, which facilitate measuring the level of structuring in action demonstrations. In addition to action segmentation, our model of acoustic packaging can flexibly integrate additional sensory cues to acquire initial knowledge about the content of action demonstrations [6] [7]. Furthermore, the acoustic packaging system was designed to process input online, which enables it to provide feedback to users interacting with a robot.
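The core binding step can be illustrated with a small sketch. This is a hypothetical simplification for illustration, not the group's actual implementation: events in each modality are reduced to time intervals, and each acoustic event (e.g. an utterance) is packaged together with every visual event (e.g. a motion segment) it temporally overlaps.

```python
# Toy sketch of temporal-overlap binding (an illustrative assumption,
# not the published acoustic packaging system).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Event:
    start: float  # seconds
    end: float    # seconds

def overlaps(a: Event, b: Event) -> bool:
    """True if the two intervals share any span of time."""
    return a.start < b.end and b.start < a.end

def acoustic_packages(
    acoustic: List[Event], visual: List[Event]
) -> List[Tuple[Event, List[Event]]]:
    """Bind each acoustic event to all temporally overlapping visual events,
    forming one 'acoustic package' per acoustic event."""
    return [(a, [v for v in visual if overlaps(a, v)]) for a in acoustic]
```

For example, an utterance spanning 0–2 s would be packaged with a motion segment spanning 1–3 s, but not with one starting at 4 s.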

Developmental Learning

In human-robot interaction, the human is likely to produce unexpected behavior at some point. In order for the robot to include new experiences in learning, the learning method has to be designed accordingly. Developmental learning tries to overcome preprogrammed behavior: motor or communication skills are instead acquired in an emergent, infant-like way.

We proposed a model that uses autonomous, target-oriented exploration to acquire speech production skills [8], [9]. The overarching vision is to implement a model that learns to produce speech from scratch, having available only its own vocal tract and acoustic experiences from its environment.
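The idea of goal-directed exploration can be sketched with a toy example. Everything here is an invented simplification (a linear stand-in "vocal tract", fixed rather than adaptive exploration noise), not the published model: the learner perturbs its current inverse estimate with noise, executes the attempt, and updates the inverse model only when the outcome comes closer to the acoustic goal.

```python
# Toy goal-babbling sketch (illustrative assumptions throughout;
# the published model is far richer and uses adaptive exploration noise).
import random

def vocal_tract(a: float) -> float:
    """Stand-in forward function: articulation -> acoustic feature."""
    return 2.0 * a + 1.0

def goal_babbling(goals, steps=500, noise=0.3, lr=0.05, seed=0):
    rng = random.Random(seed)
    w, b = 0.0, 0.0  # inverse model: articulation = w * goal + b
    for _ in range(steps):
        g = rng.choice(goals)                    # pick an acoustic target
        guess = w * g + b                        # current inverse estimate
        a = guess + rng.gauss(0.0, noise)        # explore around the guess
        if abs(vocal_tract(a) - g) < abs(vocal_tract(guess) - g):
            # The perturbed attempt came closer: regress toward (g, a).
            err = a - guess
            w += lr * err * g
            b += lr * err
    return w, b
```

Over many trials the inverse model drifts toward articulations whose acoustic outcomes match the goals, without ever being told the forward function.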

The proposed developmental model of speech acquisition could enable robots to learn to produce a variety of different speech sounds in a human-like way. Additionally, it constitutes a framework for investigating mechanisms that humans (especially infants) make use of when learning how to speak, allowing us to gain important insights, e.g. concerning the role of hyperarticulation in infancy [10].



Britta Wrede

Related Publications

Recent Best Paper/Poster Awards

Goal Babbling of Acoustic-Articulatory Models with Adaptive Exploration Noise
Philippsen A, Reinhart F, Wrede B (2016)
International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob) 


Are you talking to me? Improving the robustness of dialogue systems in a multi party HRI scenario by incorporating gaze direction and lip movement of attendees
Richter V, Carlmeyer B, Lier F, Meyer zu Borgsen S, Kummert F, Wachsmuth S, Wrede B (2016)
International Conference on Human-agent Interaction (HAI) 


"Look at Me!": Self-Interruptions as Attention Booster?
Carlmeyer B, Schlangen D, Wrede B (2016)
International Conference on Human Agent Interaction (HAI)

