Submitted
2025 Global Learning Challenge

3D Signing Avatars

Team Leader
Melissa Malzkuhn
What is the name of your organization?
Motion Light Lab (at Gallaudet University)
What is the name of your solution?
3D Signing Avatars
Provide a one-line summary or tagline for your solution.
Creating sign language experiences with quality 3D signing avatars through AI and motion capture technologies, for Deaf communities.
In what city, town, or region is your solution team headquartered?
Washington, DC, USA
In what country is your solution team headquartered?
USA
What type of organization is your solution team?
Other, including part of a larger organization (please explain below)
If you selected Other, please explain here.
Motion Light Lab is a creative research and development lab at Gallaudet University. Gallaudet University is the home institution, partially funding Motion Light Lab and providing administrative, operational, and financial oversight.
Film your elevator pitch.
What specific problem are you solving?
We aim to build signing and Deaf representation in 3D contexts: entertainment, media, education, and access. Deaf communities are vastly underrepresented and underserved. 95% of deaf children are born to hearing families, and 80% of those families do not learn sign language. The result is language deprivation, which causes lifelong harm and poor literacy. To counter this, we create engaging, immersive experiences; our broader impact is to advance Deaf representation and narratives. Our work focuses on creating accessible sign language content, using signing avatars, to ensure representation and access in any 3D format and context. Currently, the quality of signing avatars is not standardized but highly variable, and Deaf communities have expressed frustration in trying to understand them. Yet signing avatars have huge implications for accessibility, through text-to-sign and speech-to-sign translation. Once we solve the problem of avatar quality and make avatar production easier, accessibility solutions will abound.
What is your solution?
We create fluent 3D signing avatars and characters for an array of purposes, from immersive learning experiences in VR/AR/XR to original animated films. We create the 3D avatars through motion capture, which we prefer over keyframe animation ("animating by hand"): capturing the fluency of sign language, with all its nuances of movement and facial expression, is best done through motion capture. In post-production we animate for shading, lighting, and rendering, and add backgrounds to create worlds. We are building an AI model to process, render, and clean up the raw motion capture data, cutting down post-processing time, and to build a predictive model from that processed data. Our solution will then deliver impact through two models: (1) effective processing of sign language motion capture data, ready for post-production animation; and (2) datasets derived from the processed 3D sign language data, used to predict sign language grammar, syntax, and movement for translations from sign language to text, speech, and other uses.
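One part of the cleanup step described above can be illustrated with a minimal sketch: raw motion capture channels carry jitter and spikes that post-production normally removes by hand. The function and data below are hypothetical (the lab's actual AI-based pipeline is not detailed here); this only shows, in the simplest form, what "cleaning up" a single joint-rotation channel might mean.

```python
# Illustrative sketch only: smoothing one hypothetical joint-rotation
# channel from raw motion capture with a moving average. The real
# cleanup pipeline is AI-based and far more involved.

def smooth_channel(samples, window=3):
    """Return a moving-average-smoothed copy of a list of samples."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo = max(0, i - half)           # clamp the window at the edges
        hi = min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# Hypothetical wrist-rotation samples (degrees) with a jitter spike.
raw = [10.0, 10.5, 30.0, 11.0, 11.5]
cleaned = smooth_channel(raw)           # spike at index 2 is damped
```

In practice each joint contributes several such channels per frame, which is why automating this step matters for production time.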
Who does your solution serve, and in what ways will the solution impact their lives?
There are 430 million Deaf people in the world. Only 3% have access to education in sign language, according to the World Federation of the Deaf. More than 70% experience limited access to language, be it signed or written, and experience language deprivation. With the proliferation of AI access solutions such as speech-to-text and notetaking, and the datasets and models now available, the next step is to build sign language accessible contexts, and this is where 3D avatars come in. Translations and educational content in sign language, delivered through 3D avatars, will be transformative for Deaf people and communities.
Solution Team:
Melissa Malzkuhn
Founder & Director
Richard Bailey
Mike Kang
Tayla Newman