
More digital participation through artificial intelligence

[Video: Avatar – making of]

The Bavarian Broadcasting Corporation (BR) television program “Sehen statt Hören” (Seeing Instead of Hearing), presented in German Sign Language (DGS) with subtitles, offers insights into the AVASAG research project on which the sign language avatar for municipalities and companies is based.

Avatars & Co. – More digital participation through artificial intelligence: Digital assistants, voice control, and artificial intelligence are already making life easier for many people – but what about deaf people? If light switches, ticket orders, or autonomous vehicles could be controlled by commands in German Sign Language in addition to voice commands, digital participation would be redefined. Avatars and AI could help here: a barrier-free future in which technology reaches everyone. But how close are we to this vision of a modern, digital world?

Advancing digitalization increasingly demands barrier-free solutions – not only in public spaces, but also online. From June 2025, the German Accessibility Strengthening Act (BFSG), which implements the EU's European Accessibility Act, will require a wide range of digital products and services to be barrier-free. Unfortunately, many areas, such as the smart home, have not yet been adequately addressed. Although Germany has committed itself to comprehensive accessibility, it has so far implemented the EU requirements only in part. To close the remaining gaps, various model projects are currently working on AI-based approaches aimed at enabling digital participation for all.

The research project AVASAG (Avatar-based Voice Assistant for Automated Sign Language Translation) is developing a digital avatar that reproduces information in German Sign Language in real time. The aim is to enable barrier-free communication in areas such as travel information and tourism. With the help of a special sensor suit, sign movements are captured in 3D and used to train the AI. The avatar is not only intended to imitate individual signs, but also to grasp the underlying structure of the language so that it can respond flexibly – for example, when a train connection changes at short notice.

Project manager Alexander Stricker emphasizes the close cooperation with the deaf community, which is essential for developing a practical and widely accepted solution. Despite all the progress made, one thing remains clear: the avatar cannot replace human interpreters; it is intended to complement and support them, thereby contributing to greater digital accessibility.
