**Animating on Runtime: Enabling Dynamic and Realistic Experiences in Digital Humans**
“Animating on runtime” in the context of digital humans generally refers to generating a digital human’s animation – movement, gestures, facial expressions, and so on – in real time while an application, game, or other software is running, instead of playing back animation that was pre-rendered or authored in advance.
In other words, the character’s animations are created on the fly in response to real-time inputs and situations, such as user interaction, system-derived instructions, environmental changes, or AI-driven behavior.
For example, a digital human in a video game may be programmed to react differently depending on the player’s actions. Rather than drawing on a fixed, pre-determined set of animations, the game can generate appropriate animations in real time based on what is happening in the game, often with the help of machine-learning techniques.
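As a rough illustration of the idea (all class and function names here are hypothetical, not taken from any particular engine), a runtime controller might re-evaluate the game state every frame and switch or blend the character’s animation accordingly, instead of playing a single fixed clip:

```python
from dataclasses import dataclass

@dataclass
class GameState:
    player_distance: float   # metres between the player and the digital human
    player_is_hostile: bool  # e.g. derived from the player's recent actions

class Character:
    """Stand-in for an engine character; here it only tracks the active clip."""
    def __init__(self):
        self.current_clip = "idle_breathing"

    def crossfade_to(self, clip: str, blend_time: float) -> None:
        if clip != self.current_clip:
            print(f"blending from {self.current_clip} to {clip} over {blend_time}s")
            self.current_clip = clip

def select_animation(state: GameState) -> str:
    """Map the current game state to an animation label at runtime."""
    if state.player_is_hostile:
        return "defensive_posture"
    if state.player_distance < 2.0:
        return "greeting_wave"
    return "idle_breathing"

def update(character: Character, state: GameState) -> None:
    """Called every frame: re-evaluate the inputs and transition to the chosen clip."""
    character.crossfade_to(select_animation(state), blend_time=0.25)

npc = Character()
update(npc, GameState(player_distance=1.2, player_is_hostile=False))
```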
The GitHub repository for ZeroEGGS (Zero-shot Example-based Gesture Generation from Speech) by Ubisoft La Forge represents a significant development in animating on runtime. The project uses machine learning to generate realistic human gestures from speech input in real time.
ZeroEGGS is trained on a dataset of monologue sequences performed in English by a female actor and captured in 19 different motion styles, covering emotions and attitudes such as agreement, anger, disagreement, and happiness, among others. Once trained on this dataset, the model can generate gestures on the fly from the speech input it receives, with the target style controlled by a short example motion clip – hence “zero-shot” and “example-based”.
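The overall shape of such a system can be sketched as follows. This is a schematic illustration of the zero-shot, example-based idea rather than the actual ZeroEGGS code, and the module names and dimensions are assumptions: a style encoder summarizes a short example motion clip into a style embedding, and a generator produces a pose for each speech frame conditioned on both the speech features and that embedding.

```python
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Summarise an example motion clip (frames x pose_dim) into a style vector."""
    def __init__(self, pose_dim=75, style_dim=64):
        super().__init__()
        self.gru = nn.GRU(pose_dim, style_dim, batch_first=True)

    def forward(self, example_motion):           # (batch, frames, pose_dim)
        _, hidden = self.gru(example_motion)
        return hidden[-1]                         # (batch, style_dim)

class GestureGenerator(nn.Module):
    """Produce one pose per speech frame from speech features plus the style vector."""
    def __init__(self, speech_dim=80, style_dim=64, pose_dim=75):
        super().__init__()
        self.gru = nn.GRU(speech_dim + style_dim, 256, batch_first=True)
        self.out = nn.Linear(256, pose_dim)

    def forward(self, speech_feats, style):       # (batch, T, speech_dim), (batch, style_dim)
        style_seq = style.unsqueeze(1).expand(-1, speech_feats.shape[1], -1)
        hidden, _ = self.gru(torch.cat([speech_feats, style_seq], dim=-1))
        return self.out(hidden)                   # (batch, T, pose_dim)

# Toy usage with random tensors standing in for speech features (e.g. a
# mel-spectrogram) and an example clip of the desired style:
style = StyleEncoder()(torch.randn(1, 120, 75))
poses = GestureGenerator()(torch.randn(1, 300, 80), style)
print(poses.shape)   # torch.Size([1, 300, 75])
```

Because the style comes from an example clip rather than a fixed label, a new style can in principle be applied at runtime without retraining, which is what makes the approach suited to interactive applications.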
This technology allows for the creation of more dynamic and realistic digital humans in real-time applications. For instance, in video games or other interactive media, characters can respond to spoken inputs with appropriate gestures, which are not pre-animated, but rather generated at runtime.
https://github.com/ubisoft/ubisoft-laforge-ZeroEGGS