Applied Informatics Group

AR-based Task Assistance

VAMPIRE (Visual Active Memory Processes and Interactive REtrieval) is a research project on cognitive computer vision which was funded by the European Union (IST-2001-34401, May 2002 – July 2005). It investigates artificial intelligent systems that are able to understand what they see based on what they have previously memorised. In this sense, the research focuses on visual active memory processes. Another important goal is to develop advanced techniques for interactive retrieval, such as "memory spectacles".

The VAMPIRE project studies visual active memory and interactive learning. To this end, a wearable augmented reality system has been realised that provides the user with cognitive assistance functions while he or she acts naturally in a room. In short, the aim of VAMPIRE is to proceed towards memory prosthetic devices and mobile assistance technologies as its main application scenarios. In such applications, the system assists the user in performing certain tasks or provides him or her with additional memorised information relevant to the particular situation. Future real-world applications might include industrial assembly, remote teaching and prosthetic memory devices. Questions answered by such assistants are, for instance, "Where have I put my keys?" or "How do I construct this assembly?" These use cases and the general approach of constructing visual active memory processes led to the development of "mobile augmented reality assistant systems".


In the mobile assistant scenario of VAMPIRE, the user wears a mobile device that, by means of augmented reality, integrates him or her into the processing loop and thereby closes the perception-action cycle. Thus, the user is able to intuitively direct the focus of the system, as it follows his or her own. This tight coupling of system and user allows direct interaction based on visual feedback and facilitates visual learning capabilities.
 

The system must not only recognise and memorise the current constellation of objects, but also has to be aware of the current contextual situation and its own spatial position. In conjunction with capabilities to anticipate the user's intentions, the system is able to selectively present the information the user is interested in, leading to context-aware scene augmentation. But as we move towards real assistant technologies that can aid the user in performing tasks, additional functionality is required. Action recognition observes what the user is doing based on the trajectories of manipulated objects. The learning capabilities demand human-machine interaction, as the user has to teach the system new objects and even situations. Thus, several interaction modalities are incorporated that allow the user to reference spatial positions and objects in the scene, direct the system's attention and retrieve memorised knowledge. The modalities applied in the scenario range from speech recognition for object labelling to head and pointing gestures. Special visualisation techniques are used to redisplay visual information in augmented reality. In VAMPIRE we study all these aspects in non-artificial environments (such as office and kitchen setups), which poses great challenges for all vision processing (object learning and recognition, and visual tracking).
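As a rough illustration of the idea of recognising actions from the trajectories of manipulated objects, a coarse classifier over a tracked object path might look as follows. This is a hypothetical sketch, not the project's actual implementation; the features, thresholds and labels are illustrative only.

```python
# Hypothetical sketch: classify a manipulated object's action from its
# tracked 2D trajectory (image coordinates, y grows downwards).

def classify_action(trajectory):
    """trajectory: list of (x, y) object positions over time.
    Returns a coarse, illustrative action label."""
    if len(trajectory) < 2:
        return "idle"
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    # Path length: how far the object actually travelled.
    path = sum(((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(trajectory, trajectory[1:]))
    if path < 5.0:          # object barely moved
        return "idle"
    if y1 < y0 - 10:        # net upward motion in the image
        return "pick-up"
    if y1 > y0 + 10:        # net downward motion
        return "put-down"
    return "move"           # mostly horizontal displacement

# Example: an object lifted upwards is labelled as a pick-up.
print(classify_action([(0, 100), (2, 80), (4, 50)]))
```

A real system would of course use learned models over richer trajectory features, but the principle of mapping tracked object motion to action labels is the same.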


 

Computer vision research as carried out in VAMPIRE is shifting more and more from algorithmic solutions to the construction of active systems by building integrated demonstrators as described above. The technical composition and functional cooperation of so many capabilities demands a suitable system integration solution. Thus, an integration framework, the XCF software development kit, was developed that combines ideas from data- and event-driven architectures, enabling researchers to easily build highly reactive distributed information systems as needed, e.g., in the VAMPIRE mobile augmented reality scenario.
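The event-driven style such a framework builds on can be illustrated with a minimal in-process publish/subscribe dispatcher. This is a generic sketch, not the XCF API itself; the topic name and payload are hypothetical.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe dispatcher, illustrating the event-driven
    pattern (this is not the XCF API itself)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the event to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
seen = []
# A hypothetical memory component reacting to object-recognition events:
bus.subscribe("object.recognised", lambda obj: seen.append(obj["label"]))
bus.publish("object.recognised", {"label": "cup", "pos": (120, 80)})
print(seen)  # ['cup']
```

In a distributed setting the dispatcher would sit behind a network transport, so that, for example, a recognition component and a memory component can run as separate processes while still reacting to each other's events.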

Although the VAMPIRE project has been successfully completed, the system is still in use. In recent years the concept of Autonomic Computing (AC) has arisen due to the ever-growing complexity of computation-intensive systems. These systems feature numerous components with complex interaction patterns, often resulting in high maintenance costs or even in unacceptable complete failures. The idea of AC is to mimic the autonomic nervous system and equip artificial systems with self-configuring, self-healing, self-adapting and self-reconfiguring abilities, thus hiding the complexity in the system itself. VAMPIRE with its sophisticated functionality provides an excellent testbed for investigating AC concepts. For more details on the ongoing project, have a look at "An Autonomic Computing Approach for Systemic Self-Regulation".
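As a toy illustration of the self-healing idea (hypothetical, not the project's actual mechanism), a supervisor can periodically check component heartbeats and restart any component that has failed:

```python
class Component:
    """A toy system component that reports a heartbeat (hypothetical)."""

    def __init__(self, name):
        self.name = name
        self.alive = True
        self.restarts = 0

    def heartbeat(self):
        return self.alive

    def restart(self):
        self.alive = True
        self.restarts += 1

def self_heal(components):
    """One supervisor cycle: restart every component whose heartbeat failed."""
    for c in components:
        if not c.heartbeat():
            c.restart()

tracker = Component("object-tracker")
tracker.alive = False          # simulate a component crash
self_heal([tracker])
print(tracker.alive, tracker.restarts)  # True 1
```

Real self-regulating systems additionally monitor resource usage and reconfigure component wiring, but heartbeat-and-restart supervision is the simplest instance of the self-healing ability described above.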

Recent Best Paper/Poster Awards

Goal Babbling of Acoustic-Articulatory Models with Adaptive Exploration Noise
Philippsen A, Reinhart F, Wrede B (2016)
International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob) 

 

Are you talking to me? Improving the robustness of dialogue systems in a multi party HRI scenario by incorporating gaze direction and lip movement of attendees
Richter V, Carlmeyer B, Lier F, Meyer zu Borgsen S, Kummert F, Wachsmuth S, Wrede B (2016)
International Conference on Human-agent Interaction (HAI) 

 

"Look at Me!": Self-Interruptions as Attention Booster?
Carlmeyer B, Schlangen D, Wrede B (2016)
International Conference on Human Agent Interaction (HAI)

 
