
How is a sign language video created?


How Livian learns sign language – A look behind the scenes

Avatars are versatile and support deaf people in their everyday lives. Yet many people view them critically, associating avatars with unnatural, synthetic movements generated entirely by computers. With Livian, our digital sign language avatar, there is much more to it than that: real sign language from real people. Before Livian can sign even a single word, she has to learn from our deaf colleagues how to sign precisely and expressively.

No avatar without real data

To ensure that Livian's gestures appear natural and understandable, we continuously record real sign language movements. To do this, our deaf employees wear special motion capture suits. While they sign prepared texts from a teleprompter, several cameras capture every movement, no matter how subtle, in 3D.
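Conceptually, the output of such a recording session can be pictured as a sequence of frames, each holding a timestamp and 3D positions for the tracked joints. The sketch below is a simplified illustration with invented joint names and values, not alangu's actual data format:

```python
# A simplified, hypothetical sketch of motion-capture output: each frame
# stores a timestamp plus 3D positions for the tracked joints.
# Joint names and coordinates are invented for illustration.
from dataclasses import dataclass

@dataclass
class Frame:
    time: float   # seconds since the recording started
    joints: dict  # joint name -> (x, y, z) position in metres

# Two consecutive frames of a made-up signing motion at 50 fps.
recording = [
    Frame(0.00, {"wrist_r": (0.30, 1.10, 0.20), "index_tip_r": (0.32, 1.18, 0.25)}),
    Frame(0.02, {"wrist_r": (0.31, 1.11, 0.21), "index_tip_r": (0.33, 1.19, 0.26)}),
]

def displacement(a, b):
    """Euclidean distance a joint moved between two frames."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

# How far the right wrist moved between the two frames (about 17 mm here).
d = displacement(recording[0].joints["wrist_r"], recording[1].joints["wrist_r"])
```

Because every joint has a full 3D position per frame, later steps can measure, filter, and compare movements numerically rather than just looking at pixels.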

This technology is well known from the film and gaming industries – but for us, the focus is not on action, but on clear, correct sign language. The texts are thoroughly prepared and linguistically reviewed by a team of deaf and hearing employees. This ensures that the content is understandable to everyone.


Precision work down to the last detail

After recording, a particularly important step begins: post-processing. Our 3D animators take the raw motion data and analyze it frame by frame. Small inaccuracies are corrected, transitions between gestures are smoothed out, and facial expressions are adjusted so that every movement is not only technically correct but also emotionally consistent.
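One part of this clean-up, smoothing a jittery joint trajectory, can be illustrated with a simple moving-average filter. This is a deliberately minimal sketch with invented values; production pipelines use more sophisticated filters, but the principle is the same:

```python
# Minimal sketch of one post-processing step: smoothing a jittery 1-D joint
# trajectory with a moving average. The window shrinks at the edges so the
# output keeps the same length as the input.
def smooth(values, window=3):
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

noisy = [0.0, 0.5, 0.1, 0.6, 0.2]  # invented joint coordinate over 5 frames
smoothed = smooth(noisy)            # ≈ [0.25, 0.2, 0.4, 0.3, 0.4]
```

The animators' frame-by-frame review goes far beyond such a filter, of course: facial expressions and hand shapes are judged linguistically, not just numerically.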

We work closely with our sign language experts: every hand position, every facial expression, and every pause is checked—because only when everything fits together does a movement become a genuine, understandable statement in sign language.


From real sign language to digital results

Once the animation has been edited, it is stored in our database and can be transferred to Livian. She replicates the movements of the real person almost like a mirror image, faithfully and precisely. The result: Livian signs fluently, authentically, and with clear expressiveness – ready for use in accessible communication. Involving deaf culture is particularly important to us, because only in this way can we respond optimally to real needs and achieve the highest possible acceptance.
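This "mirror image" transfer is known in animation as retargeting: each recorded joint is mapped onto the corresponding bone of the avatar's rig. The sketch below uses invented joint and bone names to illustrate the idea; it is not alangu's actual rig or mapping:

```python
# Hypothetical retargeting sketch: recorded joint rotations are copied onto
# the matching bones of the avatar's skeleton. The joint/bone names and the
# mapping are illustrative only.
JOINT_MAP = {"wrist_r": "RightHand", "elbow_r": "RightForeArm"}

def retarget(frame_rotations):
    """Transfer per-joint rotations (Euler angles in degrees) to avatar bones."""
    return {JOINT_MAP[j]: rot for j, rot in frame_rotations.items() if j in JOINT_MAP}

avatar_pose = retarget({"wrist_r": (10.0, 0.0, -5.0), "elbow_r": (0.0, 45.0, 0.0)})
# avatar_pose now drives the "RightHand" and "RightForeArm" bones.
```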


Why 3D data is so valuable

Compared to traditional video material, 3D animation data offers decisive advantages: it not only makes movements visible, but also their depth, direction, and intensity—information that is essential for a precise representation of sign language.

In addition, the data can be reused flexibly: movements can be analyzed, compared, modified, and recombined in different contexts. Livian therefore not only learns to sign individual sentences, but can also act in a modular, context-sensitive manner – a capability that would not be achievable with video footage alone.
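Recombining stored clips can be pictured as joining two motion segments with a short cross-fade so the transition between signs stays smooth. The toy example below blends 1-D joint values with invented numbers; real data would be full 3D poses:

```python
# Toy sketch of modular reuse: two stored motion clips (here just one joint
# coordinate per frame) are joined with a short linear cross-fade so the
# transition between signs stays smooth. All values are invented.
def blend_join(clip_a, clip_b, overlap=2):
    mixed = []
    for i in range(overlap):
        t = (i + 1) / (overlap + 1)  # blend weight rises across the overlap
        mixed.append((1 - t) * clip_a[-overlap + i] + t * clip_b[i])
    return clip_a[:-overlap] + mixed + clip_b[overlap:]

joined = blend_join([0.0, 0.2, 0.4], [1.0, 0.8, 0.6])
```

With video, a cut between two clips would simply jump; with 3D data, the in-between poses can be computed, which is exactly what makes modular recombination possible.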


This glimpse behind the scenes shows only part of our work – we would be happy to answer any questions you may have or tell you more about the process.
Follow our Instagram channel @gebaerdensprachavatar and stay up to date!

Also interesting:
How does quality assurance work at alangu?
How do we teach our AI sign language?