Applied Informatics Group

Perception and Interpretation

Visual Scene Interpretation

Visual scene interpretation is an important ability for intelligent systems. We realize this ability as a complete loop: extracting bottom-up information from visual data, fusing this information with top-down knowledge, and improving bottom-up processing by feeding interpretations back as relevance or context information.
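
The sketch below illustrates the general idea of such a bottom-up/top-down loop. It is a minimal, hypothetical example; all names (extract_features, fuse_with_knowledge, Context) are placeholders and do not describe the group's actual implementation.

```python
# Minimal sketch of a bottom-up / top-down interpretation loop.
# All names and the data representation are hypothetical placeholders.

from dataclasses import dataclass, field


@dataclass
class Context:
    """Feedback from previous interpretations, used as top-down context."""
    relevant_labels: set = field(default_factory=set)


def extract_features(frame, context):
    """Bottom-up step: a frame is a list of (label, confidence) detections.

    The context can bias processing towards currently relevant labels.
    """
    if not context.relevant_labels:
        return frame
    return [(label, min(1.0, conf + 0.2) if label in context.relevant_labels else conf)
            for label, conf in frame]


def fuse_with_knowledge(detections, scene_model):
    """Top-down step: keep detections consistent with prior scene knowledge."""
    return [(label, conf) for label, conf in detections
            if conf >= 0.5 and label in scene_model]


def interpretation_loop(frames, scene_model):
    context = Context()
    interpretations = []
    for frame in frames:
        detections = extract_features(frame, context)                  # bottom-up
        interpretation = fuse_with_knowledge(detections, scene_model)  # fusion
        # Feed the interpretation back as context for the next frame.
        context.relevant_labels = {label for label, _ in interpretation}
        interpretations.append(interpretation)
    return interpretations


if __name__ == "__main__":
    scene_model = {"cup", "table", "person"}
    frames = [
        [("cup", 0.4), ("table", 0.9), ("lamp", 0.8)],
        [("cup", 0.45), ("person", 0.7)],
    ]
    print(interpretation_loop(frames, scene_model))
```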

Multimodal Recognition of Socio-emotional Signals

Facial expressions, head gestures, and prosodic information are important non-verbal cues for intelligent systems. Interpreting these cues enables such a system to gain information about the mental state of the user and the quality of an interaction.

Perception of Humans

Multi-modal perception of an agent's environment is crucial for interacting naturally with human partners in non-laboratory settings. This applies to robots as well as to intelligent smart homes and interactive virtual agents. We perceive activities multi-modally on both a global level and a local, human-centered level.

Recent Best Paper/Poster Awards

Goal Babbling of Acoustic-Articulatory Models with Adaptive Exploration Noise
Philippsen A, Reinhart F, Wrede B (2016)
International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)

Are you talking to me? Improving the robustness of dialogue systems in a multi party HRI scenario by incorporating gaze direction and lip movement of attendees
Richter V, Carlmeyer B, Lier F, Meyer zu Borgsen S, Kummert F, Wachsmuth S, Wrede B (2016)
International Conference on Human-Agent Interaction (HAI)

"Look at Me!": Self-Interruptions as Attention Booster?
Carlmeyer B, Schlangen D, Wrede B (2016)
International Conference on Human-Agent Interaction (HAI)